PriorNet: A Novel Lightweight Network with Multidimensional Interactive Attention for Efficient Image Dehazing (2404.15638v1)

Published 24 Apr 2024 in cs.CV and cs.AI

Abstract: Hazy images degrade visual quality, and dehazing is a crucial prerequisite for subsequent processing tasks. Most current dehazing methods rely on neural networks and face challenges such as heavy parameter and computation budgets and weak generalization. This paper introduces PriorNet, a novel, lightweight, and broadly applicable dehazing network designed to significantly improve the clarity and visual quality of hazy images while avoiding over-extraction of detail. At the core of PriorNet is the original Multi-Dimensional Interactive Attention (MIA) mechanism, which effectively captures a wide range of haze characteristics while substantially reducing the computational load and generalization difficulties associated with complex systems. By using a uniform convolutional kernel size and incorporating skip connections, we streamline the feature extraction process; simplifying the number of layers and the overall architecture not only improves dehazing efficiency but also eases deployment on edge devices. Extensive testing across multiple datasets demonstrates PriorNet's strong performance in dehazing and clarity restoration, maintaining image detail and color fidelity in single-image dehazing tasks. Notably, with a model size of just 18 KB, PriorNet shows superior dehazing generalization compared with other methods. Our research contributes to advancing image dehazing technology, providing new perspectives and tools for the field and related domains, with particular emphasis on improving universality and deployability.
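The abstract does not specify the internals of the Multi-Dimensional Interactive Attention (MIA) mechanism, but the general idea of attending over multiple feature dimensions and letting the resulting gates interact can be sketched as follows. This is an illustrative toy in NumPy, not the paper's implementation: the function name `mia_sketch` and the choice of global-average descriptors with sigmoid gates are assumptions made for demonstration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mia_sketch(feat):
    """Hypothetical multi-dimensional interactive attention sketch.

    feat: feature map of shape (C, H, W).
    Builds a gate along the channel dimension and a gate along the
    spatial dimensions, then applies both multiplicatively so the two
    attention views interact on every feature value.
    """
    # channel descriptor: global average pool over the spatial dims
    chan = feat.mean(axis=(1, 2))                # shape (C,)
    chan_gate = sigmoid(chan)[:, None, None]     # shape (C, 1, 1)

    # spatial descriptor: average over the channel dim
    spat = feat.mean(axis=0)                     # shape (H, W)
    spat_gate = sigmoid(spat)[None, :, :]        # shape (1, H, W)

    # interactive reweighting: both gates applied jointly via broadcasting
    return feat * chan_gate * spat_gate

x = np.random.randn(8, 16, 16).astype(np.float32)
y = mia_sketch(x)
print(y.shape)  # (8, 16, 16)
```

Because each sigmoid gate lies in (0, 1), the output preserves the input shape while attenuating every value, which is consistent with the lightweight, parameter-free flavor of attention the abstract emphasizes; the actual MIA module in the paper may of course use learned weights.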

