
A New Journey from SDRTV to HDRTV

Published 18 Aug 2021 in eess.IV and cs.CV (arXiv:2108.07978v2)

Abstract: Nowadays modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG). However, most available resources are still in standard dynamic range (SDR). Therefore, there is an urgent demand to transform existing SDRTV content into its HDRTV version. In this paper, we conduct an analysis of the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content. Based on the analysis, we propose a three-step solution pipeline including adaptive global color mapping, local enhancement and highlight generation. Moreover, the above analysis inspires us to present a lightweight network that utilizes global statistics as guidance to conduct image-adaptive color mapping. In addition, we construct a dataset using HDR videos in the HDR10 standard, named HDRTV1K, and select five metrics to evaluate the results of SDRTV-to-HDRTV algorithms. Furthermore, our final results achieve state-of-the-art performance in quantitative comparisons and visual quality. The code and dataset are available at https://github.com/chxy95/HDRTVNet.

Citations (52)

Summary

  • The paper introduces a three-step conversion pipeline that combines adaptive global color mapping, local enhancement, and highlight generation to effectively transform SDR content to HDR.
  • The paper details a comprehensive framework incorporating tone mapping, gamut mapping, and opto-electronic transfer functions to differentiate SDR and HDR content formation.
  • The paper validates its approach using the HDRTV1K dataset and metrics such as PSNR, SSIM, and HDR-VDP3, demonstrating superior visual quality and performance compared to existing methods.

SDRTV-to-HDRTV Conversion: Methodological Approaches and Practical Implications

This paper addresses the increasingly relevant task of converting video content from Standard Dynamic Range (SDR) to High Dynamic Range (HDR). Modern displays can render video with high dynamic range and wide color gamut, yet most existing content remains in SDR format, creating an urgent need for robust SDRTV-to-HDRTV conversion methods and laying the foundation for innovative solutions in this domain.

The authors begin by delineating the SDRTV-to-HDRTV task and emphasizing its practical importance. Notably, they model the formation of SDR and HDR content in a way that differentiates their respective processing pipelines. The paper introduces a conceptual framework consisting of tone mapping, gamut mapping, opto-electronic transfer functions (OETFs), and quantization, which forms the core underpinning of the proposed SDRTV-to-HDRTV solution pipeline.
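As a concrete instance of one stage in this formation model: HDR10 encodes luminance with the SMPTE ST 2084 perceptual quantizer (PQ), whereas SDR content typically uses a BT.709 gamma-style transfer function. Below is a minimal NumPy sketch of the PQ encoding (the inverse EOTF, often used as the encoding curve), with constants taken from the standard; the function name is ours, not from the paper.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, as defined in the standard.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_encode(luminance_nits):
    """Map absolute luminance (cd/m^2, up to 10000) to a PQ signal in [0, 1]."""
    y = np.clip(np.asarray(luminance_nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2
```

For reference, 100 nits (a typical SDR peak) lands near a PQ code value of 0.51, while the 10000-nit ceiling maps to 1.0 — the strong nonlinearity is one reason naive SDR-to-HDR remapping fails.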

The methodological innovation of this paper lies in its three-step solution pipeline: adaptive global color mapping (AGCM), local enhancement (LE), and highlight generation (HG). The proposed AGCM step is particularly noteworthy, introducing a lightweight network that uses global statistics for image-adaptive color mapping. Restricting this component to global operations lets it achieve strong performance with minimal computational overhead. This understanding of the formation pipeline directs the authors' methodological choices, applying a divide-and-conquer principle to the highly ill-posed nature of the task.
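The paper's actual network is not reproduced here; the toy sketch below only illustrates the core property of AGCM — a spatially invariant, per-pixel color mapping whose parameters are predicted from image-level statistics. The per-channel mean as the condition vector, the untrained random "predictor", and all names are our assumptions for illustration, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter predictor: one untrained linear layer mapping a
# 3-d global-statistics vector to a 3x3 color matrix plus a 3-d bias.
W = rng.normal(scale=0.05, size=(12, 3))

def adaptive_global_color_mapping(img):
    """Apply a per-image, spatially invariant color mapping (H, W, 3) -> (H, W, 3)."""
    stats = img.reshape(-1, 3).mean(axis=0)        # global guidance: channel means
    params = W @ stats                             # (12,) predicted parameters
    matrix = np.eye(3) + params[:9].reshape(3, 3)  # initialized near identity
    bias = params[9:]
    # Equivalent to a 1x1 convolution: every pixel sees the same mapping.
    return img @ matrix.T + bias
```

Because the mapping depends only on a pixel's color and the image's global statistics, identical colors anywhere in the frame receive identical outputs — which is what keeps this step so cheap relative to spatially varying networks.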

The paper's empirical contribution is encapsulated in the introduction of the HDRTV1K dataset, comprising HDR videos following the HDR10 standard, coupled with the selection of five metrics for evaluating SDRTV-to-HDRTV algorithms: PSNR, SSIM, SR-SIM, ΔE_ITP, and HDR-VDP3. The choice of these metrics underscores the authors' comprehensive approach, aiming to capture nuanced aspects such as mapping accuracy, structural similarity, color difference, and visual quality.
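Of the five metrics, PSNR is the simplest to state precisely; a minimal NumPy implementation for images normalized to [0, 1] is sketched below (the peak value is left as a parameter, since evaluation pipelines may normalize HDR signals differently — the exact evaluation setup here is the paper's, not ours).

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mse = np.mean((pred - target) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 0.1 on a [0, 1] scale yields an MSE of 0.01 and hence a PSNR of exactly 20 dB, which is a handy sanity check for any implementation.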

Quantitative results from the study demonstrate state-of-the-art performance across these metrics, with the proposed HDRTVNet achieving superior visual quality and quantitative benchmarks compared to existing methods. Specifically, the adaptive global color mapping step, benchmarked against numerous alternatives, achieved the second-best SSIM score while using far fewer parameters. The full method maintained this strong performance when the local enhancement and highlight generation steps were added.

The paper also draws a sharp distinction between this task and traditional LDR-to-HDR reconstruction, which fundamentally differs in its objectives and methodologies. This distinction is crucial for the broader research community, which has primarily focused on predicting HDR scene luminance rather than on converting video content between the formats defined by current TV standards. The claim consolidates the paper's contribution, advocating for SDRTV-to-HDRTV conversion as a distinct research area.

Looking forward, the implications of this research are two-fold. Practically, advancing SDRTV-to-HDRTV conversion offers substantial value for industries dependent on legacy content, facilitating their transition to HDR formats. Theoretically, the proposed pipeline and methodology may inspire future work aimed at improving real-time video processing technologies and adaptive content display. Given the rapidly evolving landscape of video display technologies, further research might explore integrating AI-driven techniques to enhance conversion results and real-time processing capabilities.

In conclusion, this paper makes a significant contribution to the field of computer vision, offering a comprehensive methodology that bridges an essential gap in the conversion of SDR content to HDR. By drawing on well-established image processing principles and introducing novel techniques, the authors offer a robust foundation upon which future research and technological development can build.
