Joint Conditional Diffusion Model for Image Restoration with Mixed Degradations (2404.07770v1)

Published 11 Apr 2024 in cs.CV

Abstract: Image restoration is particularly challenging in adverse weather conditions, especially when multiple degradations occur simultaneously. Blind image decomposition was proposed to tackle this issue; however, its effectiveness relies heavily on accurately estimating each component. Although diffusion-based models exhibit strong generative abilities in image restoration tasks, they may generate irrelevant content when the degraded images are severely corrupted. To address these issues, we leverage physical constraints to guide the whole restoration process, constructing a mixed degradation model based on the atmosphere scattering model. We then formulate our Joint Conditional Diffusion Model (JCDM), which incorporates the degraded image and a degradation mask to provide precise guidance. To achieve better color and detail recovery, we further integrate a refinement network to reconstruct the restored image, in which an Uncertainty Estimation Block (UEB) is employed to enhance the features. Extensive experiments on both multi-weather and weather-specific datasets demonstrate the superiority of our method over state-of-the-art competing methods.
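
The abstract builds on the classical atmosphere scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the clean scene radiance, t the transmission map, and A the global atmospheric light. The sketch below is only an illustration of how such a physical model could be extended to a mixed degradation (an additive rain/snow residual gated by a degradation mask, then attenuated by haze); the paper's exact formulation may differ, and all function and variable names here are hypothetical.

```python
import torch

def atmosphere_scattering(clean, transmission, airlight):
    """Classical atmosphere scattering model: I = J * t + A * (1 - t)."""
    return clean * transmission + airlight * (1.0 - transmission)

def mixed_degradation(clean, transmission, airlight, residual, mask):
    """Hypothetical mixed-degradation composition (assumption, not the paper's
    exact model): an additive weather residual (rain/snow/raindrop) gated by a
    degradation mask, followed by haze via the scattering model."""
    corrupted = torch.clamp(clean + mask * residual, 0.0, 1.0)
    return atmosphere_scattering(corrupted, transmission, airlight)

# Toy example on a random 3x64x64 image.
clean = torch.rand(1, 3, 64, 64)
transmission = torch.full((1, 1, 64, 64), 0.7)   # e.g. t(x) = exp(-beta * d(x))
airlight = torch.tensor(0.9)                     # global atmospheric light
residual = torch.rand(1, 3, 64, 64)              # synthetic rain/snow layer
mask = (torch.rand(1, 1, 64, 64) > 0.8).float()  # degradation mask
degraded = mixed_degradation(clean, transmission, airlight, residual, mask)
print(degraded.shape)  # torch.Size([1, 3, 64, 64])
```

In the paper's framing, the degraded image and the degradation mask produced by such a physical model serve as the joint condition for the diffusion model; the mask tells the sampler where degradations act, which is what constrains it from generating irrelevant content in severely corrupted regions.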

