Large Kernel Distillation Network for Efficient Single Image Super-Resolution (2407.14340v1)

Published 19 Jul 2024 in eess.IV and cs.CV

Abstract: Efficient and lightweight single-image super-resolution (SISR) models have achieved remarkable performance in recent years. One effective approach is the use of large-kernel designs, which have been shown to improve the performance of SISR models while reducing their computational requirements. However, current state-of-the-art (SOTA) models still suffer from high computational costs. To address this issue, we propose the Large Kernel Distillation Network (LKDN) in this paper. Our approach simplifies the model structure and introduces more efficient attention modules to reduce computational costs while also improving performance. Specifically, we employ the reparameterization technique to enhance model performance without adding extra inference cost. We also introduce to SISR a new optimizer from other tasks, which improves training speed and performance. Our experimental results demonstrate that LKDN outperforms existing lightweight SR methods and achieves SOTA performance.
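
The reparameterization the abstract refers to is, in the large-kernel SR literature, typically a RepVGG-style structural fusion: parallel convolution branches used during training are folded into a single kernel at inference, so the extra training-time capacity costs nothing at deployment. The sketch below is a minimal generic illustration of that idea, assuming a two-branch 3x3 + 1x1 block; the `RepConv` class, its channel layout, and the branch choice are illustrative assumptions, not LKDN's exact block design.

```python
import torch
import torch.nn as nn

class RepConv(nn.Module):
    """Illustrative structural reparameterization (RepVGG-style sketch).

    Training runs parallel 3x3 and 1x1 branches; reparameterize() folds
    them into one 3x3 convolution, so inference pays for a single conv.
    This is a generic assumption, not LKDN's exact block layout.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)
        self.fused = None  # set by reparameterize()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.fused is not None:               # inference: one conv
            return self.fused(x)
        return self.conv3(x) + self.conv1(x)     # training: two branches

    @torch.no_grad()
    def reparameterize(self) -> None:
        # A 1x1 conv equals a 3x3 conv whose kernel is zero everywhere
        # except the center, so the branches merge by kernel addition.
        k = self.conv3.weight.clone()
        k[:, :, 1:2, 1:2] += self.conv1.weight
        b = self.conv3.bias + self.conv1.bias
        self.fused = nn.Conv2d(k.size(1), k.size(0), 3, padding=1)
        self.fused.weight.copy_(k)
        self.fused.bias.copy_(b)

# Sanity check: fused and unfused paths agree numerically.
m = RepConv(8).eval()
x = torch.randn(1, 8, 16, 16)
y = m(x)
m.reparameterize()
assert torch.allclose(y, m(x), atol=1e-5)
```

Because convolution is linear, the fused path is exactly equivalent to the sum of branches, which is what lets such designs improve accuracy during training without adding inference cost.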

Authors (7)
  1. Chengxing Xie (10 papers)
  2. Xiaoming Zhang (113 papers)
  3. Linze Li (19 papers)
  4. Haiteng Meng (1 paper)
  5. Tianlin Zhang (17 papers)
  6. Tianrui Li (84 papers)
  7. Xiaole Zhao (4 papers)
Citations (19)