LoLiSRFlow: Joint Single Image Low-light Enhancement and Super-resolution via Cross-scale Transformer-based Conditional Flow (2402.18871v1)
Abstract: The visibility of real-world images is often limited by both low light and low resolution; however, the literature addresses these issues only separately, through Low-Light Enhancement (LLE) and Super-Resolution (SR) methods. A simple cascade of such approaches cannot work harmoniously to cope with the highly ill-posed problem of simultaneously enhancing visibility and resolution. In this paper, we propose a normalizing flow network, dubbed LoLiSRFlow, specifically designed to model the degradation mechanism inherent in joint LLE and SR. To break the bonds of the one-to-many mapping from low-light low-resolution images to normal-light high-resolution images, LoLiSRFlow directly learns the conditional probability distribution over the variety of feasible high-resolution, well-exposed solutions. Specifically, a multi-resolution parallel transformer acts as a conditional encoder that extracts a Retinex-induced, resolution- and illumination-invariant map as the prior, and an invertible network maps the distribution of normally exposed high-resolution images to a latent distribution. The backward inference is equivalent to introducing an additional constraint loss into the normal training routine, enabling the manifold of naturally exposed high-resolution images to be accurately depicted. We also propose a synthetic dataset, named DFSR-LLE, that models realistic low-light low-resolution degradation, containing 7100 pairs of low-resolution dark-light and high-resolution normal sharp images. Quantitative and qualitative experimental results demonstrate the effectiveness of our method on both the proposed synthetic dataset and real datasets.
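The abstract describes a conditional normalizing flow: an invertible network maps well-exposed high-resolution images to a simple latent distribution, conditioned on features from an encoder, and training maximizes the exact likelihood via the change-of-variables formula. The sketch below is not the paper's implementation; it is a minimal toy conditional affine coupling layer (in the style of NICE/Glow/SRFlow) with placeholder random weights, where the conditioning vector stands in for the encoder's illumination-invariant map. It illustrates the two properties the paper relies on: exact invertibility and a tractable log-determinant for the negative log-likelihood loss.

```python
import numpy as np

rng = np.random.default_rng(0)

class ConditionalCoupling:
    """Toy conditional affine coupling: split the input in half and
    predict a scale/shift for the second half from the first half plus
    a conditioning feature. Weights are random placeholders."""

    def __init__(self, dim, cond_dim):
        self.half = dim // 2
        self.W = rng.normal(0.0, 0.1, (self.half + cond_dim, 2 * self.half))

    def _scale_shift(self, x1, cond):
        h = np.concatenate([x1, cond], axis=-1) @ self.W
        log_s, t = h[..., :self.half], h[..., self.half:]
        return np.tanh(log_s), t  # bounded log-scale for stability

    def forward(self, x, cond):
        # x -> z, returning the latent and log|det Jacobian|
        x1, x2 = x[..., :self.half], x[..., self.half:]
        log_s, t = self._scale_shift(x1, cond)
        z2 = x2 * np.exp(log_s) + t
        return np.concatenate([x1, z2], axis=-1), log_s.sum(axis=-1)

    def inverse(self, z, cond):
        # z -> x, exactly undoing forward() for the same condition
        z1, z2 = z[..., :self.half], z[..., self.half:]
        log_s, t = self._scale_shift(z1, cond)
        x2 = (z2 - t) * np.exp(-log_s)
        return np.concatenate([z1, x2], axis=-1)

def nll(z, logdet):
    """Negative log-likelihood under a standard Gaussian latent,
    corrected by the flow's log-determinant (change of variables)."""
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)
    return -(log_pz + logdet)

layer = ConditionalCoupling(dim=8, cond_dim=4)
x = rng.normal(size=(2, 8))   # stand-in for HR image features
c = rng.normal(size=(2, 4))   # stand-in for conditional encoder output
z, logdet = layer.forward(x, c)
x_rec = layer.inverse(z, c)
print(np.allclose(x, x_rec))  # True: the mapping is exactly invertible
```

At inference time one would sample z from the latent Gaussian and run the inverse pass conditioned on the degraded input, which is how a flow model produces diverse feasible restorations for a single low-light low-resolution image.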
[2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 
5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 
2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). 
Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. 
[2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. 
arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. 
TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. 
arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Smeulders, A.W., Chu, D.M., Cucchiara, R., Calderara, S., Dehghan, A., Shah, M.: Visual tracking: An experimental survey. IEEE transactions on pattern analysis and machine intelligence 36(7), 1442–1468 (2013) Liu et al. [2021] Liu, R., Ma, L., Zhang, J., Fan, X., Luo, Z.: Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In: CVPR, pp. 10561–10570 (2021) Guo et al. [2020] Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R.: Zero-reference deep curve estimation for low-light image enhancement. In: CVPR, pp. 1780–1789 (2020) Guo et al. [2016] Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. 
[2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, R., Ma, L., Zhang, J., Fan, X., Luo, Z.: Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In: CVPR, pp. 10561–10570 (2021) Guo et al. [2020] Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R.: Zero-reference deep curve estimation for low-light image enhancement. In: CVPR, pp. 
1780–1789 (2020) Guo et al. [2016] Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. 
[2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R.: Zero-reference deep curve estimation for low-light image enhancement. In: CVPR, pp. 1780–1789 (2020) Guo et al. [2016] Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. 
arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. 
[2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. 
[2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer
Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019)
Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW (2018)
Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019)
Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017)
Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer
Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021)
Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021)
Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction.
arXiv preprint arXiv:2110.09408 (2021)
Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 
3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. 
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. 
[2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 
94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. 
[2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: HRFormer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: SRFormer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: MIRNet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Liu, R., Ma, L., Zhang, J., Fan, X., Luo, Z.: Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In: CVPR, pp. 10561–10570 (2021) Guo et al. [2020] Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R.: Zero-reference deep curve estimation for low-light image enhancement. In: CVPR, pp. 1780–1789 (2020) Guo et al. [2016] Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R.: Zero-reference deep curve estimation for low-light image enhancement. In: CVPR, pp. 1780–1789 (2020) Guo et al. [2016] Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 
3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. 
[2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. 
In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 
10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 
0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. 
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. 
[2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. 
[2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. 
arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R.: Zero-reference deep curve estimation for low-light image enhancement. In: CVPR, pp. 1780–1789 (2020) Guo et al. [2016] Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. 
[2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, X., Li, Y., Ling, H.: Lime: Low-light image enhancement via illumination map estimation. TIP 26(2), 982–993 (2016) Lugmayr et al. [2020] Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. 
arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. 
[2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lugmayr, A., Danelljan, M., Gool, L.V., Timofte, R.: Srflow: Learning the super-resolution space with normalizing flow. In: ECCV, pp. 715–732 (2020). Springer Cai et al. [2019] Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. 
Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. In: NeurIPS 31 (2018)
Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: HRFormer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: SRFormer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han, H., Chung, S.-W., Kang, H.-G.: MIRNet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer
Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin Transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: SwinIR: Image restoration using Swin Transformer. In: ICCV, pp. 1833–1844 (2021)
[2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. 
[2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. 
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. 
arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. 
arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. 
[2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 
5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 
2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). 
Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. 
arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 
1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. 
arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. 
In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. 
arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. 
[2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 
94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. 
TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 
3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer
- Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin Transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
- Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: SwinIR: Image restoration using Swin Transformer. In: ICCV, pp. 1833–1844 (2021)
- Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
- Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
- Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
- Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
- Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NeurIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: HRFormer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: SRFormer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: MIRNet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp.
1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 
10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW, pp. 0–0 (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. 
[2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. 
[2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 
5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 
2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). 
Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 
5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 
2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). 
Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. 
NIPS 31 (2018)
Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: HRFormer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: SRFormer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han, H., Chung, S.-W., Kang, H.-G.: MIRNet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin Transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: SwinIR: Image restoration using Swin Transformer. In: ICCV, pp. 1833–1844 (2021)
Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Cai, J., Zeng, H., Yong, H., Cao, Z., Zhang, L.: Toward real-world single image super-resolution: A new benchmark and a new model. In: ICCV, pp. 3086–3095 (2019) Wang et al. [2018] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: ECCVW (2018) Winkler et al. [2019] Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al.
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 
10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 
7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. 
TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 
7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. 
TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. 
[2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 
5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. 
In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. 
[2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: ESRGAN: Enhanced super-resolution generative adversarial networks. In: ECCVW (2018)
- Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019)
- Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017)
- Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer
- Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021)
- Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin Transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
- Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: SwinIR: Image restoration using Swin Transformer. In: ICCV, pp. 1833–1844 (2021)
- Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
- Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
- Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
- Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
- Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
[2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. 
[2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 
10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. 
[2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 
94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. 
Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. 
[2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 
3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. 
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Winkler, C., Worrall, D., Hoogeboom, E., Welling, M.: Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042 (2019) Ledig et al. [2017] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. 
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, pp. 4681–4690 (2017) Zamir et al. [2020] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. 
[2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. 
[2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. 
[2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. 
[2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. 
- Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer
- Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021)
- Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
- Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021)
- Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
- Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
- Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
- Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation.
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. 
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. 
arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. 
arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. 
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement. In: ECCV, pp. 492–511 (2020). Springer
Wang et al. [2021] Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021)
Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021)
Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021)
Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone et al.
[2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. 
arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. 
[2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. 
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. 
[2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. 
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: ICCV, pp. 1905–1914 (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. 
[2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021)
Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. 
arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. 
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV, pp. 10012–10022 (2021) Liang et al. [2021] Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021) Chen et al. [2023] Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. 
[2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. 
[2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: HRFormer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: From error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: SRFormer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han, H., Chung, S.-W., Kang, H.-G.: MIRNet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. 
[2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: ICCV, pp. 1833–1844 (2021)
- Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023)
- Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
- Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
- Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
- Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. 
[2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. 
In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Chen, X., Wang, X., Zhou, J., Qiao, Y., Dong, C.: Activating more pixels in image super-resolution transformer. In: CVPR, pp. 22367–22377 (2023) Liu et al. [2018] Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) Wu et al. [2022] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks.
arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. 
[2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. 
[2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
- Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
- Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022)
- Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
- Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. 
arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 
94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. 
TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 
3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
- Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J.: Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In: CVPR, pp. 5901–5910 (2022) Ma et al. [2022] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 
94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022) Dinh et al. [2014] Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 
7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. 
TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) Kingma and Dhariwal [2018] Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. 
[2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. 
[2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
- Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z.: Toward fast, flexible, and robust low-light image enhancement. In: CVPR, pp. 5637–5646 (2022)
- Dinh, L., Krueger, D., Bengio, Y.: NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-RevNet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with Bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-Glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-Flow: Conditional generative flow models for images and 3D point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: DeFlow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: RELLISUR: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: HRFormer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Dinh, L., Krueger, D., Bengio, Y.: Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. 
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 
3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. 
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. NIPS 31 (2018) Jacobsen et al. [2018] Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. 
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. 
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Jacobsen, J.-H., Smeulders, A., Oyallon, E.: i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) Ardizzone et al. [2019] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Ardizzone, L., Lüth, C., Kruse, J., Rother, C., Köthe, U.: Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392 (2019) Trippe and Turner [2018] Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018) Sun et al. [2019] Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
[2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 
10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. 
[2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 
114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Trippe, B.L., Turner, R.E.: Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908 (2018)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019)
- Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
- Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. 
arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Sun, H., Mehta, R., Zhou, H.H., Huang, Z., Johnson, S.C., Prabhakaran, V., Singh, V.: Dual-glow: Conditional flow-based generative model for modality transfer. In: ICCV, pp. 10611–10620 (2019) Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. 
[2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020) Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. 
[2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
arXiv preprint arXiv:2110.09408 (2021)
Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021)
Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021)
Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021)
Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Pumarola et al. [2020] Pumarola, A., Popov, S., Moreno-Noguer, F., Ferrari, V.: C-flow: Conditional generative flow models for images and 3d point clouds. In: CVPR, pp. 7949–7958 (2020)
Atanov et al. [2019] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019)
Yuan et al.
[2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 
1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 
114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. 
[2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., Vetrov, D.: Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505 (2019) Zhao et al. [2021] Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. 
- Zhao, R., Liu, T., Xiao, J., Lun, D.P., Lam, K.-M.: Invertible image decolorization. TIP 30, 6081–6095 (2021) Wolf et al. [2021] Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wolf, V., Lugmayr, A., Danelljan, M., Van Gool, L., Timofte, R.: Deflow: Learning complex image degradations from unpaired data with conditional flows. In: CVPR, pp. 94–103 (2021) Aakerberg et al. [2021] Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Yuan et al. [2021] Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021) Agustsson and Timofte [2017] Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017) Timofte et al. [2017] Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 
114–125 (2017) Lv et al. [2019] Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. 
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 
136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Aakerberg, A., Nasrollahi, K., Moeslund, T.B.: Rellisur: A real low-light image super-resolution dataset. In: NeurIPS (2021) Zhang et al. [2018] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention.
In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR, pp. 2472–2481 (2018)
- Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408 (2021)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
- Timofte, R., Agustsson, E., Van Gool, L., Yang, M.-H., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW, pp. 114–125 (2017)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019)
[2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 
136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW (2017)
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Lv, F., Li, Y., Lu, F.: Attention guided low-light image enhancement with a large scale low-light simulation dataset. arXiv preprint arXiv:1908.00682 (2019) Guo et al. [2019] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. 
[2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. 
[2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. 
[2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L.: Toward convolutional blind denoising of real photographs. In: CVPR, pp. 1712–1722 (2019) Brooks et al. [2019] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. 
[2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. 
[2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. 
[2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J.T.: Unprocessing images for learned raw denoising. In: CVPR, pp. 11036–11045 (2019) Chan and Whiteman [1983] Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. 
In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004) Li et al. [2019] Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. 
arXiv preprint arXiv:2008.01698 (2020) Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019) Haris et al. [2018] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018) Lim et al. [2017] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 
136–144 (2017) Zhao et al. [2020] Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: European Conference on Computer Vision, pp. 56–72 (2020). Springer Zhou et al. [2023] Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023) Han et al. [2020] Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020) Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)
- Chan, L.C., Whiteman, P.: Hardware-constrained hybrid coding of video imagery. TAES (1), 71–84 (1983)
- Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. TIP 13(4), 600–612 (2004)
- Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: CVPR, pp. 3867–3876 (2019)
- Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR, pp. 1664–1673 (2018)
- Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: CVPRW, pp. 136–144 (2017)
- Zhao, H., Kong, X., He, J., Qiao, Y., Dong, C.: Efficient image super-resolution using pixel attention. In: ECCV, pp. 56–72 (2020). Springer
- Zhou, Y., Li, Z., Guo, C.-L., Bai, S., Cheng, M.-M., Hou, Q.: Srformer: Permuted self-attention for single image super-resolution. arXiv preprint arXiv:2303.09735 (2023)
- Han, H., Chung, S.-W., Kang, H.-G.: Mirnet: Learning multiple identities representations in overlapped speech. arXiv preprint arXiv:2008.01698 (2020)