
Image Super-Resolution by Neural Texture Transfer

Published 3 Mar 2019 in cs.CV (arXiv:1903.00834v2)

Abstract: Due to the significant information loss in low-resolution (LR) images, it has become extremely challenging to further advance the state-of-the-art of single image super-resolution (SISR). Reference-based super-resolution (RefSR), on the other hand, has proven to be promising in recovering high-resolution (HR) details when a reference (Ref) image with similar content as that of the LR input is given. However, the quality of RefSR can degrade severely when Ref is less similar. This paper aims to unleash the potential of RefSR by leveraging more texture details from Ref images with stronger robustness even when irrelevant Ref images are provided. Inspired by the recent work on image stylization, we formulate the RefSR problem as neural texture transfer. We design an end-to-end deep model which enriches HR details by adaptively transferring the texture from Ref images according to their textural similarity. Instead of matching content in the raw pixel space as done by previous methods, our key contribution is a multi-level matching conducted in the neural space. This matching scheme facilitates multi-scale neural transfer that allows the model to benefit more from those semantically related Ref patches, and gracefully degrade to SISR performance on the least relevant Ref inputs. We build a benchmark dataset for the general research of RefSR, which contains Ref images paired with LR inputs with varying levels of similarity. Both quantitative and qualitative evaluations demonstrate the superiority of our method over state-of-the-art.

Citations (258)

Summary

  • The paper presents SRNTT, a multi-scale neural texture transfer method that improves reference-based super-resolution.
  • It employs multi-level neural matching to adaptively transfer textures from semantically related references, outperforming prior methods.
  • Experiments on the CUFED5 dataset show significant improvements in PSNR and SSIM, demonstrating the model's robustness under challenging conditions.


The paper presents a novel approach to reference-based super-resolution (RefSR), in which super-resolving an image is assisted by reference images. Conventional single image super-resolution (SISR) often produces blurry results at high upscaling factors because of the significant information loss in low-resolution inputs. The proposed method, Super-Resolution by Neural Texture Transfer (SRNTT), draws on recent advances in image stylization to formulate RefSR as a neural texture transfer problem. SRNTT adaptively transfers textures from reference images, allowing the model to benefit from semantically related patches while gracefully degrading to SISR performance when irrelevant references are provided.

SRNTT's core contribution is multi-scale neural transfer driven by multi-level matching in the neural feature space, rather than in the raw pixel space used by previous methods. This makes texture transfer more general and more robust to the relevance of the reference image. The model is an end-to-end deep learning framework that extracts reference features at multiple levels, matches them against features of the low-resolution input, and integrates the matched textures to synthesize convincing high-resolution images.
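The matching step can be illustrated with a minimal sketch. This is not the paper's implementation: the function name and the use of random arrays as stand-ins for VGG-style feature maps are assumptions for illustration. The idea it demonstrates is the one described above: compare patches of the LR input and the reference in a feature space (here via normalized inner products, i.e., cosine similarity) and keep, for each LR patch, the best-matching reference patch and its similarity score, which can then weight the texture transfer.

```python
import numpy as np

def match_in_feature_space(lr_feat, ref_feat, patch=3):
    """Exhaustive patch matching in a neural feature space (hypothetical
    stand-in for SRNTT's multi-level matching): for each LR-feature patch,
    find the most similar Ref-feature patch by cosine similarity."""
    C, H, W = lr_feat.shape
    _, Hr, Wr = ref_feat.shape

    def patches(feat, h, w):
        out = []
        for i in range(h - patch + 1):
            for j in range(w - patch + 1):
                p = feat[:, i:i + patch, j:j + patch].ravel()
                out.append(p / (np.linalg.norm(p) + 1e-8))  # L2-normalize
        return np.stack(out)              # (num_patches, C * patch * patch)

    lr_p = patches(lr_feat, H, W)
    ref_p = patches(ref_feat, Hr, Wr)
    sim = lr_p @ ref_p.T                  # cosine-similarity matrix
    best = sim.argmax(axis=1)             # index of best Ref patch per LR patch
    score = sim.max(axis=1)               # similarity used to weight transfer
    return best, score
```

Because the scores degrade smoothly as the reference becomes less similar, a model weighting transferred textures by them can fall back toward plain SISR behavior on irrelevant references, which is the robustness property the paper emphasizes.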

Strong Numerical Results and Claims

SRNTT shows notable performance improvements in the reported experiments. It is evaluated on a new dataset, CUFED5, curated to cover varying levels of reference-image similarity. SRNTT significantly outperforms state-of-the-art methods such as SRGAN (Ledig et al., 2017) and CrossNet (Zheng et al., 2018) in both qualitative and quantitative evaluations, including the PSNR and SSIM metrics. In particular, the results demonstrate the model's ability to recover finer textures under challenging conditions (i.e., dissimilar reference images), indicating strong adaptability and robustness.
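For reference, PSNR, one of the two quantitative metrics cited above, is a simple function of mean squared error against the ground-truth image. The sketch below is a standard textbook definition, not code from the paper; the `data_range` default assumes 8-bit images.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM, the other metric, instead compares local luminance, contrast, and structure statistics, which is why papers typically report both: PSNR captures pixel-wise fidelity, SSIM perceptual structure.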

Implications and Future Work

Practically, SRNTT offers a more effective and flexible solution for RefSR in scenarios where high-quality reference images may not be perfectly aligned or content-relevant. Theoretically, the work marks a shift from pixel-based reference matching toward context-aware matching in neural feature space.

Future research could further optimize multi-scale texture integration, diversify the neural feature extraction layers, and refine the matching process. The problem could also benefit from a better understanding of the relations and hierarchies among the feature maps used for texture representation. The research community might also integrate SRNTT into larger frameworks or pipelines, allowing it to work within complex image-processing systems or for application-specific tasks such as medical image enhancement or video processing.

Conclusion

This work makes a substantial contribution to advancing RefSR methodology through a data-driven approach. It ventures into the relatively unexplored territory of neural texture adaptation while meeting practical needs by robustly recovering image details. Continued exploration and improvement will likely unlock more of the potential of neural approaches to RefSR, shaping future advances in image processing and computer vision.
