Swift Parameter-free Attention Network for Efficient Super-Resolution (2311.12770v3)

Published 21 Nov 2023 in eess.IV and cs.CV

Abstract: Single Image Super-Resolution (SISR) is a crucial task in low-level computer vision, aiming to reconstruct high-resolution images from low-resolution counterparts. Conventional attention mechanisms have significantly improved SISR performance but often result in complex network structures and a large number of parameters, leading to slow inference and large model size. To address this issue, we propose the Swift Parameter-free Attention Network (SPAN), a highly efficient SISR model that balances parameter count, inference speed, and image quality. SPAN employs a novel parameter-free attention mechanism, which leverages symmetric activation functions and residual connections to enhance high-contribution information and suppress redundant information. Our theoretical analysis demonstrates the effectiveness of this design in achieving the attention mechanism's purpose. We evaluate SPAN on multiple benchmarks, showing that it outperforms existing efficient super-resolution models in both image quality and inference speed, achieving a significant quality-speed trade-off. This makes SPAN highly suitable for real-world applications, particularly in resource-constrained scenarios. Notably, we won first place in both the overall performance track and the runtime track of the NTIRE 2024 efficient super-resolution challenge. Our code and models are made publicly available at https://github.com/hongyuanyu/SPAN.
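The parameter-free attention idea sketched in the abstract can be illustrated in a few lines. This is a minimal sketch, not the paper's exact formulation: the specific symmetric activation (a zero-centered sigmoid) and the bare element-wise modulation are illustrative assumptions; in the actual network the attention would operate on convolutional feature maps inside residually connected blocks.

```python
import numpy as np

def symmetric_activation(x):
    # Odd-symmetric activation (hypothetical choice: zero-centered sigmoid).
    # Being odd, it treats positive and negative responses of equal magnitude
    # symmetrically while preserving their sign.
    return 1.0 / (1.0 + np.exp(-x)) - 0.5

def parameter_free_attention(features):
    # Parameter-free attention: the attention map is computed from the
    # feature map itself, with no learned weights. Large-magnitude
    # (high-contribution) responses receive proportionally larger weights,
    # while near-zero (redundant) responses are suppressed.
    weights = symmetric_activation(features)
    return features * weights
```

Because the weights come from the features themselves, the mechanism adds zero parameters and negligible compute, which is what makes the quality-speed trade-off described above possible.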

Authors (8)
  1. Cheng Wan (48 papers)
  2. Hongyuan Yu (21 papers)
  3. Zhiqi Li (42 papers)
  4. Yajun Zou (5 papers)
  5. Yuqing Liu (28 papers)
  6. Xuanwu Yin (12 papers)
  7. Kunlong Zuo (6 papers)
  8. YiHang Chen (29 papers)
Citations (13)
