
Multiple Latent Space Mapping for Compressed Dark Image Enhancement (2403.07622v1)

Published 12 Mar 2024 in cs.CV, cs.AI, and eess.IV

Abstract: Dark image enhancement aims to convert dark images into normal-light images. Existing dark image enhancement methods take uncompressed dark images as inputs and achieve great performance. However, in practice, dark images are often compressed before storage or transmission over the Internet. Current methods perform poorly on compressed dark images: artifacts hidden in the dark regions are amplified, producing uncomfortable visual effects for observers. Based on this observation, this study aims to enhance compressed dark images while avoiding the amplification of compression artifacts. Since texture details intertwine with compression artifacts in compressed dark images, detail enhancement and blocking-artifact suppression contradict each other in image space. Therefore, we handle the task in latent space. To this end, we propose a novel latent mapping network based on a variational auto-encoder (VAE). First, unlike previous VAE-based methods that use only single-resolution features, we exploit multiple latent spaces with multi-resolution features to reduce detail blur and improve image fidelity. Specifically, we train two multi-level VAEs to project compressed dark images and normal-light images into their respective latent spaces. Second, we leverage a latent mapping network to transform features from the compressed dark space to the normal-light space. Since the degradation models of darkness and compression differ from each other, the latent mapping process is divided into an enlightening branch and a deblocking branch. Comprehensive experiments demonstrate that the proposed method achieves state-of-the-art performance in compressed dark image enhancement.
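The data flow the abstract describes — multi-resolution latents from a multi-level encoder, then a two-branch mapping in latent space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the pooling-based "encoder", the branch formulas, and the fusion step are all stand-in assumptions chosen only to show the structure of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_multilevel(image, num_levels=3):
    """Hypothetical multi-level encoder: produces latent features at several
    resolutions (here via repeated 2x2 average pooling, illustrative only)."""
    latents = []
    feat = image
    for _ in range(num_levels):
        h, w = feat.shape
        feat = feat[: h // 2 * 2, : w // 2 * 2]           # crop to even size
        feat = feat.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        latents.append(feat)
    return latents

def map_latents(dark_latents):
    """Hypothetical dual-branch latent mapping: an 'enlightening' branch and a
    'deblocking' branch act on each latent level, then are fused."""
    mapped = []
    for z in dark_latents:
        enlightened = 1.5 * z + 0.1       # stand-in for the enlightening branch
        deblocked = z - 0.05 * z ** 2     # stand-in for the deblocking branch
        mapped.append(0.5 * (enlightened + deblocked))
    return mapped

# A grayscale stand-in for a compressed dark image.
dark_image = rng.random((64, 64))
latents = encode_multilevel(dark_image)
mapped = map_latents(latents)
print([z.shape for z in mapped])  # one latent tensor per resolution level
```

In the actual method, the mapped latents would be fed to the decoder of the normal-light VAE to reconstruct the enhanced image; the sketch stops at the latent mapping stage.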

