MvKSR: Multi-view Knowledge-guided Scene Recovery for Hazy and Rainy Degradation (2401.03800v2)
Abstract: High-quality imaging is crucial for safety supervision and intelligent deployment in fields such as transportation and industry, enabling precise, detailed monitoring of operations and timely detection of potential hazards. However, adverse weather conditions such as atmospheric haze and precipitation can severely degrade image quality: dense haze or water droplets scatter the incident light, blurring the captured image and reducing its contrast, which increases the likelihood of incorrect assessments and interpretations by intelligent imaging systems (IIS). To restore images degraded by hazy and rainy conditions, this paper proposes a novel multi-view knowledge-guided scene recovery network (termed MvKSR). Specifically, guided filtering is first applied to the degraded image to separate its high- and low-frequency components. An encoder-decoder-based multi-view feature coarse extraction module (MCE) then coarsely extracts features from the different views of the degraded image, and a multi-view feature fine fusion module (MFF) learns to restore the image through mixed supervision across these views. In addition, we propose an atrous residual block to handle both global restoration and local repair in hazy, rainy, and mixed scenes. Extensive experimental results demonstrate that MvKSR outperforms other state-of-the-art methods in efficiency and stability when restoring degraded scenes in IIS.
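The high/low-frequency separation described in the abstract can be sketched with a self-guided filter (guided image filtering with the input acting as its own guide). The following single-channel NumPy sketch is illustrative only, not the paper's implementation; the window radius `r` and regularizer `eps` are assumed hyperparameters.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, with edge padding."""
    h, w = x.shape
    pad = np.pad(x, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def guided_filter_split(img, r=2, eps=1e-3):
    """Self-guided filter: returns (low, high) with low + high == img."""
    mean_i = box_mean(img, r)
    var_i = box_mean(img * img, r) - mean_i ** 2
    a = var_i / (var_i + eps)        # edge-preserving gain
    b = (1.0 - a) * mean_i           # offset
    low = box_mean(a, r) * img + box_mean(b, r)  # smoothed base layer
    return low, img - low            # high-frequency detail layer
```

On a flat region the high-frequency layer is near zero, while textures and rain streaks end up in it, which is what makes the two views useful as separate network inputs.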
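The atrous residual block is only named at a high level in the abstract; one plausible reading, sketched below for a single channel in NumPy, pairs a 3x3 dilated (atrous) convolution with a skip connection. The kernel weights, dilation rate, and ReLU placement are illustrative assumptions, not the paper's design.

```python
import numpy as np

def atrous_conv2d(x, kernel, dilation):
    """3x3 dilated convolution with zero 'same' padding, single channel."""
    h, w = x.shape
    r = dilation
    pad = np.pad(x, r, mode="constant")
    out = np.zeros((h, w), dtype=float)
    for i in range(3):
        for j in range(3):
            # Taps are spaced `dilation` pixels apart, enlarging the
            # receptive field without adding parameters.
            out += kernel[i, j] * pad[i * r:i * r + h, j * r:j * r + w]
    return out

def atrous_residual_block(x, kernel, dilation=2):
    """Residual connection around a dilated conv: the conv path gathers
    wider (global) context while the skip path preserves local detail."""
    return x + np.maximum(atrous_conv2d(x, kernel, dilation), 0.0)
```

The residual formulation matches the abstract's stated goal of handling global restoration (dilated conv path) and local repair (identity skip path) in one block.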