Channel Attention and Multi-level Features Fusion for Single Image Super-Resolution (1810.06935v1)

Published 16 Oct 2018 in cs.CV

Abstract: Convolutional neural networks (CNNs) have demonstrated superior performance in super-resolution (SR). However, most CNN-based SR methods either neglect the varying importance of feature channels or fail to take full advantage of hierarchical features. To address these issues, this paper presents a novel recursive unit. First, at the beginning of each unit, a compact channel attention mechanism adaptively recalibrates the channel-wise importance of the input features. Then, multi-level features, rather than only deep-level features, are extracted and fused. Additionally, we find that applying the learnable upsampling method (i.e., transposed convolution) only on the residual branch, while using bicubic interpolation on the identity branch, forces the model to learn more details than applying learnable upsampling on both branches. Experiments show that our method achieves results competitive with state-of-the-art methods while maintaining faster speed.
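
To make the abstract's two ideas concrete, below is a minimal PyTorch sketch: a recursive unit that applies squeeze-and-excitation-style channel attention at its entry and fuses multi-level features, and a toy SR network whose residual branch is upsampled with a learnable transposed convolution while the identity branch uses fixed bicubic interpolation. All layer names, channel counts, the number of levels, and the fusion rule (concatenation plus a 1x1 convolution) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative; the
    paper's 'compact' variant may differ in detail)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.weights = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                           # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.weights(x)                  # recalibrate channel importance

class RecursiveUnit(nn.Module):
    """Hypothetical recursive unit: channel attention at the entry, then
    several conv levels whose outputs are all kept and fused by a 1x1 conv."""
    def __init__(self, channels=64, levels=3):
        super().__init__()
        self.attention = ChannelAttention(channels)
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(levels)
        )
        self.fuse = nn.Conv2d(channels * levels, channels, 1)

    def forward(self, x):
        feat = self.attention(x)
        features = []
        for conv in self.convs:
            feat = F.relu(conv(feat))
            features.append(feat)                   # keep every level, not just the last
        return x + self.fuse(torch.cat(features, dim=1))

class SRNet(nn.Module):
    """Toy SR model showing the upsampling asymmetry from the abstract:
    learnable transposed convolution on the residual branch only, fixed
    bicubic interpolation on the identity branch. Kernel/stride/padding of
    the transposed conv assume scale=2."""
    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = RecursiveUnit(channels)
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=scale, padding=1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)
        self.scale = scale

    def forward(self, lr):
        residual = self.tail(self.up(self.body(self.head(lr))))
        identity = F.interpolate(lr, scale_factor=self.scale,
                                 mode='bicubic', align_corners=False)
        return identity + residual

x = torch.randn(1, 3, 24, 24)
print(SRNet()(x).shape)  # torch.Size([1, 3, 48, 48])
```

Keeping the identity branch as fixed bicubic interpolation means the trained path only has to model the high-frequency residual, which matches the abstract's observation that this split pushes the network to learn more detail.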

Authors (5)
  1. Yue Lu (37 papers)
  2. Yun Zhou (39 papers)
  3. Zhuqing Jiang (14 papers)
  4. Xiaoqiang Guo (2 papers)
  5. Zixuan Yang (16 papers)
Citations (21)
