
SUNet: Swin Transformer UNet for Image Denoising (2202.14009v1)

Published 28 Feb 2022 in eess.IV, cs.CV, and cs.LG

Abstract: Image restoration is a challenging, ill-posed, and long-standing problem. In the past few years, convolutional neural networks (CNNs) have dominated computer vision and achieved considerable success across different levels of vision tasks, including image restoration. Recently, however, Swin Transformer-based models have shown impressive performance, even surpassing CNN-based methods to become the state of the art on high-level vision tasks. In this paper, we propose a restoration model called SUNet, which uses the Swin Transformer layer as its basic block and applies it within a UNet architecture for image denoising. The source code and pre-trained models are available at https://github.com/FanChiMao/SUNet.

Authors (3)
  1. Chi-Mao Fan (4 papers)
  2. Tsung-Jung Liu (5 papers)
  3. Kuan-Hsien Liu (5 papers)
Citations (85)

Summary

  • The paper introduces SUNet, which leverages Swin Transformer layers within a UNet architecture to improve image denoising.
  • It proposes a dual up-sample block that combines subpixel and bilinear methods to mitigate common checkerboard artifacts.
  • Extensive experiments demonstrate competitive SSIM scores and image fidelity on standard benchmarks, underscoring its robust performance.

Overview of SUNet: Swin Transformer UNet for Image Denoising

The paper "SUNet: Swin Transformer UNet for Image Denoising" by Chi-Mao Fan et al. introduces a novel approach to image denoising by integrating Swin Transformer layers into the UNet architecture. This research addresses the limitations of traditional convolutional neural network (CNN)-based methods in capturing global image information, which is crucial for effective image restoration.

Methodology

SUNet adopts the Swin Transformer as its backbone. The Swin Transformer has demonstrated state-of-the-art performance on various high-level vision tasks such as image classification and segmentation, and its hierarchical structure and ability to model long-range dependencies make it a promising candidate for image restoration. In addition, the authors propose a dual up-sample block that integrates subpixel and bilinear up-sampling to mitigate the checkerboard artifacts commonly associated with transposed convolutions.
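The subpixel half of the dual up-sample block relies on the standard pixel-shuffle rearrangement. The sketch below is an illustrative NumPy implementation of that rearrangement, not the authors' code; in the dual up-sample block, the output of such a subpixel branch is combined with a bilinear-interpolation branch to suppress checkerboard artifacts.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Subpixel up-sampling: rearrange a (C*r*r, H, W) feature map
    into a (C, H*r, W*r) map, as used by subpixel convolution."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (c, r, r), then interleave the two
    # r-axes with the spatial axes.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# 4 channels of a 2x2 map become 1 channel of a 4x4 map.
feat = np.arange(16).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
```

The bilinear branch can be any standard resize; averaging or concatenating the two branches is one plausible way to realize the "dual" combination described above.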

The architecture of SUNet consists of three main modules:

  1. Shallow Feature Extraction: Employs a 3×3 convolution to capture low-frequency image information.
  2. UNet Feature Extraction: Utilizes a modified UNet structure where Swin Transformer Blocks replace standard convolutional layers to extract high-level semantic features.
  3. Reconstruction Module: Another 3×3 convolution is used to reconstruct the denoised image from the deep features obtained in the previous module.
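The 3×3 convolutions in the first and last modules can be illustrated with a naive single-channel NumPy sketch (a hypothetical toy, not the authors' multi-channel implementation):

```python
import numpy as np

def conv3x3_same(img, kernel):
    """Naive 3x3 'same' convolution (zero padding) over a 2D image.
    SUNet uses 3x3 convolutions like this (with many channels) for
    shallow feature extraction and for final reconstruction."""
    h, w = img.shape
    padded = np.pad(img, 1)  # zero-pad one pixel on every side
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
out = conv3x3_same(img, identity)  # the identity kernel returns the input
```

In the full model the learned kernels map 3 input channels to many feature channels (and back), but the sliding-window arithmetic is the same.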

A significant contribution of SUNet is the innovative use of Swin Transformer Layers in low-level vision tasks such as image denoising, where preserving both local and global image details is vital.

Experimental Results

The authors conducted extensive experiments using common datasets such as CBSD68 and Kodak24 to evaluate the performance of SUNet against several state-of-the-art methods, including traditional prior-based and CNN-based architectures. The primary evaluation metrics were PSNR and SSIM, both critical in assessing image fidelity and structural similarity.

For noise levels σ = 10, σ = 30, and σ = 50, SUNet exhibited competitive performance, achieving SSIM scores that were among the best or second-best compared to other methods, although the PSNR values were slightly less dominant. Nevertheless, the efficiency and competitive qualitative results demonstrate the compelling advantages of using Swin Transformer for image denoising.
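The evaluation protocol above can be reproduced in miniature: add synthetic additive white Gaussian noise at a given σ and score the result with PSNR (SSIM is computed analogously over local windows). This is a hedged sketch of the standard procedure, not the authors' evaluation code.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))
for sigma in (10, 30, 50):  # the noise levels tested in the paper
    noisy = clean + rng.normal(0.0, sigma, clean.shape)
    # Higher sigma -> lower PSNR; a denoiser is scored by how much
    # it improves on this noisy baseline.
    score = psnr(clean, noisy)
```

Since the expected MSE of the noisy image is σ², the noisy-input baseline PSNR is roughly 10·log10(255²/σ²), which is why benchmarks report results per noise level.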

Implications and Future Work

The integration of Swin Transformer layers into the UNet architecture for image denoising marks a significant step in leveraging transformer-based models for low-level vision tasks. The approach offers a robust framework that matches or exceeds several well-established CNN-based models, particularly in preserving structural image detail.

Future research could explore the application of SUNet to more complex image restoration tasks involving real-world noise and blur. Additionally, further reduction in computational overhead and optimization of the Swin Transformer layers, considering both parameter efficiency and processing speed, could enhance the model's deployment in practical scenarios.

Overall, SUNet presents a promising direction for future developments in the application of transformer models to image restoration tasks. The ongoing evolution of hardware capabilities and algorithmic innovation will likely continue to bridge the gap between transformer models and traditional CNNs, paving the way for more advanced and efficient image processing techniques.
