LYT-NET: Lightweight YUV Transformer-based Network for Low-light Image Enhancement
Abstract: This letter introduces LYT-Net, a novel lightweight transformer-based model for low-light image enhancement (LLIE). LYT-Net consists of several layers and detachable blocks, including two novel blocks, the Channel-Wise Denoiser (CWD) and the Multi-Stage Squeeze & Excite Fusion (MSEF) block, alongside the traditional Transformer block, Multi-Headed Self-Attention (MHSA). Our method adopts a dual-path approach, treating the chrominance channels (U, V) and the luminance channel (Y) as separate entities so the model can better handle illumination adjustment and corruption restoration. A comprehensive evaluation on established LLIE datasets demonstrates that, despite its low complexity, our model outperforms recent LLIE methods. The source code and pre-trained models are available at https://github.com/albrateanu/LYT-Net
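The dual-path idea in the abstract can be sketched as follows: convert an RGB image into the YUV color space and route the luminance channel (Y) and the chrominance channels (U, V) down separate branches. This is a minimal illustration only; the BT.601 conversion matrix used here is a standard assumption, and the split is a stand-in for LYT-Net's actual enhancement branches, which are defined in the paper and repository.

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YUV.

    Uses the standard BT.601 coefficients; LYT-Net's exact conversion
    may differ (see the official repository).
    """
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.14713,  -0.28886,   0.436   ],
                  [ 0.615,    -0.51499,  -0.10001 ]])
    return rgb @ m.T

def dual_path_split(rgb: np.ndarray):
    """Split an image into the two paths described in the abstract:
    luminance (illumination adjustment) and chrominance (restoration)."""
    yuv = rgb_to_yuv(rgb)
    y = yuv[..., :1]   # luminance path (Y)
    uv = yuv[..., 1:]  # chrominance path (U, V)
    return y, uv

img = np.random.rand(4, 4, 3)   # dummy RGB image
y, uv = dual_path_split(img)
print(y.shape, uv.shape)        # (4, 4, 1) (4, 4, 2)
```

For a neutral gray pixel the chrominance channels are (near) zero, which is why separating them lets a model adjust brightness on Y without disturbing color.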