CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement
Abstract: Low-light image enhancement (LLIE) aims to improve the quality of low-illumination images. However, existing methods face two challenges: (1) uncertainty in restoration caused by diverse brightness degradations; (2) loss of texture and color information caused by noise suppression and light enhancement. In this paper, we propose a novel enhancement approach, CodeEnhance, which leverages quantized priors and image refinement to address these challenges. In particular, we reframe LLIE as learning an image-to-code mapping from low-light images to a discrete codebook that has been learned from high-quality images. To strengthen this process, a Semantic Embedding Module (SEM) is introduced to integrate semantic information with low-level features, and a Codebook Shift (CS) mechanism is designed to adapt the pre-learned codebook to the distinct characteristics of our low-light data. Additionally, we present an Interactive Feature Transformation (IFT) module that refines texture and color information during image reconstruction, allowing for interactive enhancement based on user preferences. Extensive experiments on both real-world and synthetic benchmarks demonstrate that incorporating prior knowledge and controllable information transfer significantly improves LLIE performance in terms of quality and fidelity. The proposed CodeEnhance exhibits superior robustness to various degradations, including uneven illumination, noise, and color distortion.
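To make the image-to-code idea concrete, the sketch below illustrates a VQGAN-style nearest-neighbour codebook lookup with an additive offset standing in for the Codebook Shift. This is a minimal illustration under our own assumptions, not the authors' implementation: the function and tensor names (`quantize_with_shift`, `codebook`, `shift`) are hypothetical, and the paper's SEM and IFT modules are omitted.

```python
import torch

def quantize_with_shift(features, codebook, shift=None):
    """Nearest-neighbour codebook lookup (VQGAN-style), with an optional
    additive shift applied to a pre-learned high-quality codebook.

    features: (N, D) encoder features extracted from the low-light image
    codebook: (K, D) code vectors learned from high-quality images
    shift:    (K, D) optional learned offset (illustrating the Codebook Shift idea)
    Returns the quantized features and the selected code indices.
    """
    codes = codebook if shift is None else codebook + shift
    # Euclidean distance between every feature vector and every code vector.
    dists = torch.cdist(features, codes, p=2)
    indices = dists.argmin(dim=1)   # the learned image-to-code mapping
    quantized = codes[indices]      # replace each feature with its matched code
    return quantized, indices

# Toy usage: 16 feature vectors matched against a 512-entry, 256-dim codebook.
feats = torch.randn(16, 256)
book = torch.randn(512, 256)
offset = 0.01 * torch.randn(512, 256)
quantized, idx = quantize_with_shift(feats, book, offset)
```

The quantized features would then be decoded back to an enhanced image; in the paper, this decoding stage is where the IFT module injects texture and color details under user control.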