A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth (2206.06427v3)

Published 13 Jun 2022 in cs.CV

Abstract: Imagery collected from outdoor visual environments is often degraded by dense smoke or haze. A key challenge for scene-understanding research in these degraded visual environments (DVE) is the lack of representative benchmark datasets, which are required to evaluate state-of-the-art vision algorithms (e.g., detection and tracking) in degraded settings. In this paper, we address some of these limitations by introducing the first realistic hazy image benchmark, from both aerial and ground views, with paired haze-free images and in-situ haze density measurements. The dataset was produced in a controlled environment with professional smoke-generating machines that covered the entire scene, and consists of images captured from the perspectives of both an unmanned aerial vehicle (UAV) and an unmanned ground vehicle (UGV). We also evaluate a set of representative state-of-the-art dehazing approaches and object detectors on the dataset. The full dataset presented in this paper, including the ground-truth object classification bounding boxes and haze density measurements, is provided for the community to evaluate their algorithms at https://a2i2-archangel.vision. A subset of this dataset has been used for the "Object Detection in Haze" track of the CVPR UG2 2022 challenge at http://cvpr2022.ug2challenge.org/track1.html.
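Because each hazy image comes with a paired haze-free reference, full-reference quality metrics can be applied directly to dehazing outputs. The sketch below is not the paper's evaluation code; it is a minimal illustration that computes PSNR against a haze-free reference, with a toy hazy image synthesized via the standard atmospheric scattering model I(x) = J(x)·t + A·(1 − t), where the constant transmission `t` and airlight `A` are assumptions for the example.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a haze-free reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a random "haze-free" patch and a hazy version of it,
# generated with the atmospheric scattering model I = J*t + A*(1 - t).
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
t, A = 0.6, 255.0                       # assumed constant transmission and airlight
hazy = clean * t + A * (1.0 - t)

print(f"PSNR(hazy vs. clean): {psnr(clean, hazy):.2f} dB")
```

In practice a dehazing method would be scored by running it on the hazy image and computing `psnr(clean, dehazed)`; the in-situ haze density measurements additionally let scores be broken down by quantified haze level.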
