Emphasizing Crucial Features for Efficient Image Restoration (2405.11468v1)
Abstract: Image restoration is a challenging ill-posed problem that estimates the latent sharp image from its degraded counterpart. Although existing methods have achieved promising performance by designing novel module architectures, they ignore the fact that different regions in a corrupted image undergo varying degrees of degradation. In this paper, we propose an efficient and effective framework that adapts to the varying degrees of degradation across regions for image restoration. Specifically, we design a spatial and frequency attention mechanism (SFAM) to emphasize crucial features for restoration. SFAM consists of two modules: the spatial domain attention module (SDAM) and the frequency domain attention module (FDAM). The SDAM discerns degradation locations through spatial selective attention and channel selective attention in the spatial domain, while the FDAM enhances high-frequency signals to amplify the disparities between sharp and degraded image pairs in the spectral domain. Additionally, to capture global-range information, we introduce a multi-scale block (MSBlock) consisting of three scale branches, each containing multiple simplified channel attention blocks (SCABlocks) and a multi-scale feed-forward block (MSFBlock). Finally, we propose ECFNet, which integrates the aforementioned components into a U-shaped backbone to recover high-quality images. Extensive experimental results demonstrate the effectiveness of ECFNet, which outperforms state-of-the-art (SOTA) methods on both synthetic and real-world datasets.
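The core idea behind the FDAM described above — enhancing high-frequency signals in the spectral domain so that the gap between sharp and degraded images becomes more pronounced — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name `frequency_emphasis` and the `boost`/`cutoff` parameters are hypothetical, and a simple hard-threshold radial mask stands in for whatever learned weighting the actual module uses.

```python
import numpy as np

def frequency_emphasis(image, boost=1.5, cutoff=0.1):
    """Amplify high-frequency content of a 2-D image in the Fourier domain.

    Illustrative sketch only: frequencies whose normalized radius exceeds
    `cutoff` (in cycles/sample) are scaled by `boost`; low frequencies pass
    through unchanged.
    """
    h, w = image.shape
    # Move to the spectral domain, with DC centered for a radial mask.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    # Boost everything above the cutoff frequency, keep the rest.
    gain = np.where(radius > cutoff, boost, 1.0)
    # The gain is radially symmetric, so the inverse transform stays real.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * gain))
    return np.real(filtered)
```

Because degradations such as blur and haze suppress high frequencies, a weighting of this kind increases the energy of exactly the bands where sharp and degraded images differ most, which is what makes the spectral branch complementary to spatial attention.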
Authors: Hu Gao, Bowen Ma, Ying Zhang, Jingfan Yang, Jing Yang, Depeng Dang