
DRFN: Deep Recurrent Fusion Network for Single-Image Super-Resolution with Large Factors (1908.08837v1)

Published 23 Aug 2019 in cs.CV and eess.IV

Abstract: Recently, single-image super-resolution has made great progress owing to the development of deep convolutional neural networks (CNNs). The vast majority of CNN-based models use a pre-defined upsampling operator, such as bicubic interpolation, to upscale input low-resolution images to the desired size and learn a non-linear mapping between the interpolated image and the ground-truth high-resolution (HR) image. However, interpolation processing can introduce visual artifacts as details are over-smoothed, particularly when the super-resolution factor is large. In this paper, we propose a Deep Recurrent Fusion Network (DRFN), which uses transposed convolution instead of bicubic interpolation for upsampling and integrates different-level features extracted from recurrent residual blocks to reconstruct the final HR images. We adopt a deep recurrence learning strategy and thus obtain a larger receptive field, which is conducive to reconstructing an image more accurately. Furthermore, we show that the multi-level fusion structure is well suited to image super-resolution problems. Extensive benchmark evaluations demonstrate that the proposed DRFN performs better than most current deep learning methods in terms of accuracy and visual quality, especially at large scaling factors, while using fewer parameters.
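To make the abstract's three ingredients concrete (learned transposed-convolution upsampling, recurrent residual blocks with shared weights, and multi-level feature fusion), here is a minimal PyTorch sketch. It is an illustration of the general idea only, not the authors' DRFN architecture; the module names, channel counts, number of recurrent stages, and kernel sizes are all assumptions.

```python
import torch
import torch.nn as nn


class RecurrentResidualBlock(nn.Module):
    """Residual block whose weights are reused across recurrences (illustrative)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x, steps=3):
        out = x
        for _ in range(steps):  # weight sharing over steps enlarges the receptive field
            out = x + self.body(out)
        return out


class DRFNSketch(nn.Module):
    """Minimal sketch: transposed-conv upsampling plus multi-level feature fusion."""

    def __init__(self, scale=4, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Learned transposed convolution replaces pre-defined bicubic interpolation.
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=scale * 2,
                                     stride=scale, padding=scale // 2)
        self.blocks = nn.ModuleList(RecurrentResidualBlock(channels) for _ in range(3))
        # Fuse features collected from every recurrent stage before reconstruction.
        self.fuse = nn.Conv2d(channels * 3, channels, 1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):
        feat = self.up(self.head(lr))
        levels = []
        out = feat
        for block in self.blocks:
            out = block(out)
            levels.append(out)  # keep each level's features for fusion
        fused = self.fuse(torch.cat(levels, dim=1))
        return self.tail(fused)


if __name__ == "__main__":
    # Upscale a 32x32 low-resolution patch by 4x.
    sr = DRFNSketch(scale=4)(torch.randn(1, 3, 32, 32))
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```

The key design choice the sketch mirrors is that upsampling is learned end-to-end rather than fixed, and features from several processing depths are concatenated and fused, so the reconstruction can draw on both shallow and deep representations.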

Authors (7)
  1. Xin Yang (314 papers)
  2. Haiyang Mei (14 papers)
  3. Jiqing Zhang (9 papers)
  4. Ke Xu (309 papers)
  5. Baocai Yin (81 papers)
  6. Qiang Zhang (466 papers)
  7. Xiaopeng Wei (16 papers)
Citations (92)