Single-Image HDR Reconstruction Assisted Ghost Suppression and Detail Preservation Network for Multi-Exposure HDR Imaging (2403.04228v1)
Abstract: The reconstruction of high dynamic range (HDR) images from multi-exposure low dynamic range (LDR) images in dynamic scenes presents significant challenges, especially in preserving and restoring information in oversaturated regions and avoiding ghosting artifacts. Current methods often struggle to address these challenges; our work bridges this gap with a multi-exposure HDR image reconstruction network for dynamic scenes, complemented by single-frame HDR image reconstruction. This network, comprising single-frame HDR reconstruction with enhanced stop image (SHDR-ESI) and SHDR-ESI-assisted multi-exposure HDR reconstruction (SHDRA-MHDR), effectively leverages the ghost-free characteristic of single-frame HDR reconstruction and the detail-enhancing capability of the ESI in oversaturated areas. Specifically, SHDR-ESI integrates single-frame HDR reconstruction with the utilization of the ESI. This integration not only optimizes the single-image HDR reconstruction process but also guides the synthesis of multi-exposure HDR images in SHDRA-MHDR. In this method, single-frame HDR reconstruction is applied to reduce potential ghosting effects in multi-exposure HDR synthesis, while the ESI enhances detail information during the synthesis process. Technically, SHDR-ESI incorporates a detail enhancement mechanism, consisting of a self-representation module and a mutual-representation module, designed to aggregate crucial information from both the reference image and the ESI. To fully exploit the complementary information in the non-reference images, a feature interaction fusion module is integrated within SHDRA-MHDR. Additionally, a ghost suppression module, guided by the ghost-free results of SHDR-ESI, suppresses ghosting artifacts.
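The core idea of the ghost suppression module — letting a ghost-free single-frame HDR estimate guide the multi-exposure merge — can be illustrated with a classical weighted fusion. The sketch below is an assumption-laden toy, not the paper's network: the function name, the Gaussian well-exposedness weight, the linear camera model, and the agreement-based ghost weight are all illustrative stand-ins for the learned modules described in the abstract.

```python
import numpy as np

def merge_with_ghost_suppression(ldr_stack, exposures, single_frame_hdr, sigma=0.1):
    """Toy weighted HDR merge (illustrative only, not the paper's method):
    pixels whose radiance estimate deviates from a ghost-free single-frame
    HDR reference are down-weighted, mimicking reference-guided deghosting."""
    # Radiance estimate per frame, assuming a linear camera response
    radiance = [img / t for img, t in zip(ldr_stack, exposures)]
    # Well-exposedness weight: favour mid-tone pixels, penalise saturation
    well_exposed = [np.exp(-((img - 0.5) ** 2) / 0.08) for img in ldr_stack]
    # Ghost weight: agreement with the ghost-free single-frame estimate
    ghost_free = [np.exp(-((r - single_frame_hdr) ** 2) / (2 * sigma ** 2))
                  for r in radiance]
    w = [we * gf for we, gf in zip(well_exposed, ghost_free)]
    num = sum(wi * ri for wi, ri in zip(w, radiance))
    den = sum(w) + 1e-8  # avoid division by zero where all weights vanish
    return num / den
```

A moving object appears as a radiance outlier in one frame; its ghost weight collapses there, so the merged value is drawn from the frames that agree with the reference.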