MuLA-GAN: Multi-Level Attention GAN for Enhanced Underwater Visibility

Published 25 Dec 2023 in cs.CV and eess.IV (arXiv:2312.15633v1)

Abstract: The underwater environment presents unique challenges, including color distortions, reduced contrast, and blurriness, hindering accurate analysis. In this work, we introduce MuLA-GAN, a novel approach that leverages the synergistic power of Generative Adversarial Networks (GANs) and Multi-Level Attention mechanisms for comprehensive underwater image enhancement. The integration of Multi-Level Attention within the GAN architecture significantly enhances the model's capacity to learn discriminative features crucial for precise image restoration. By selectively focusing on relevant spatial and multi-level features, our model excels in capturing and preserving intricate details in underwater imagery, essential for various applications. Extensive qualitative and quantitative analyses on diverse datasets, including the UIEB test dataset, the UIEB challenge dataset, U45, and the UCCS dataset, highlight the superior performance of MuLA-GAN compared to existing state-of-the-art methods. Experimental evaluations on a specialized dataset tailored for bio-fouling and aquaculture applications demonstrate the model's robustness in challenging environmental conditions. On the UIEB test dataset, MuLA-GAN achieves exceptional PSNR (25.59) and SSIM (0.893) scores, surpassing Water-Net, the second-best model, which scores 24.36 and 0.885, respectively. This work not only addresses a significant research gap in underwater image enhancement but also underscores the pivotal role of Multi-Level Attention in enhancing GANs, providing a novel and comprehensive framework for restoring underwater image quality.
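The PSNR figures quoted above (25.59 dB for MuLA-GAN vs. 24.36 dB for Water-Net) follow the standard definition, PSNR = 10·log10(MAX² / MSE), where MAX is the peak pixel value (255 for 8-bit images) and MSE is the mean squared error between the enhanced image and the reference. The paper's actual evaluation pipeline is not reproduced here; the following is a minimal pure-Python sketch of the metric, with a hypothetical toy image pair for illustration:

```python
import math

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    if len(reference) != len(enhanced):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error over all pixels
    mse = sum((r - e) ** 2 for r, e in zip(reference, enhanced)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example (hypothetical data): an "enhanced" image off by 1 at every pixel,
# so MSE = 1 and PSNR = 20 * log10(255) ≈ 48.13 dB for 8-bit images.
ref = [10, 20, 30, 40]
enh = [11, 21, 31, 41]
print(round(psnr(ref, enh), 2))  # 48.13
```

Higher PSNR indicates lower pixel-wise distortion; SSIM, the second metric reported, instead compares local luminance, contrast, and structure, which is why papers typically report both.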

