
UIERL: Internal-External Representation Learning Network for Underwater Image Enhancement (2306.08344v1)

Published 14 Jun 2023 in cs.CV

Abstract: Underwater image enhancement (UIE) is a meaningful but challenging task, and many learning-based UIE methods have been proposed in recent years. Although much progress has been made, these methods still suffer from two issues: (1) A single underwater image exhibits significant region-wise quality differences due to the underwater imaging process, especially across regions with different scene depths. Existing methods neglect this internal characteristic of underwater images, resulting in inferior performance. (2) Due to the uniqueness of the acquisition approach, underwater imaging tools usually capture multiple images of the same or similar scenes, so the underwater images to be enhanced in practical usage are highly correlated. However, when processing a single image, existing methods do not consider the rich external information provided by the related images, leaving room for improvement in their performance. Motivated by these two aspects, we propose a novel internal-external representation learning (UIERL) network to better perform UIE by exploiting internal and external information simultaneously. In the internal representation learning stage, a new depth-based region feature guidance network is designed, including a region segmentation based on scene depth to sense regions with different quality levels, followed by a region-wise space encoder module. By performing region-wise feature learning separately for regions of different quality, the network provides effective guidance for global features and thus guides intra-image differentiated enhancement. In the external representation learning stage, we first propose an external information extraction network to mine the rich external information in the related images. Then, internal and external features interact with each other via the proposed external-assist-internal module and internal-assist-external module.
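The depth-based region segmentation described above can be illustrated with a minimal sketch: pixels are clustered by estimated scene depth into a few quality regions (a simple 1-D k-means, in the spirit of the clustering literature the paper draws on). The function name, the number of regions, and the clustering choice here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def segment_by_depth(depth_map: np.ndarray, n_regions: int = 3,
                     n_iters: int = 10) -> np.ndarray:
    """Cluster pixels into quality regions by scene depth (1-D k-means).

    Hypothetical illustration of depth-based region segmentation;
    the paper's actual network-based segmentation differs.
    """
    depths = depth_map.ravel().astype(np.float64)
    # Initialize cluster centers evenly across the observed depth range.
    centers = np.linspace(depths.min(), depths.max(), n_regions)
    for _ in range(n_iters):
        # Assign each pixel to its nearest depth center.
        labels = np.abs(depths[:, None] - centers[None, :]).argmin(axis=1)
        # Recompute each center as the mean depth of its assigned pixels.
        for k in range(n_regions):
            if np.any(labels == k):
                centers[k] = depths[labels == k].mean()
    return labels.reshape(depth_map.shape)
```

Each resulting label map could then gate region-wise feature learning, so that near-camera (typically higher-quality) and far (more degraded) regions are enhanced differently.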
