
DDT: Dual-branch Deformable Transformer for Image Denoising (2304.06346v1)

Published 13 Apr 2023 in cs.CV and cs.AI

Abstract: Transformers are beneficial for image denoising because they model long-range dependencies, overcoming the limitations of convolutional inductive biases. However, directly applying the transformer structure to remove noise is challenging because its complexity grows quadratically with spatial resolution. In this paper, we propose an efficient Dual-branch Deformable Transformer (DDT) denoising network that captures local and global interactions in parallel. We divide features with a fixed patch size in the local branch and a fixed number of patches in the global branch. In addition, we apply a deformable attention operation in both branches, which helps the network focus on more important regions and further reduces computational complexity. Extensive experiments on real-world and synthetic denoising tasks show that DDT achieves state-of-the-art performance at significantly lower computational cost.
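The abstract's dual-branch partitioning can be illustrated with a short sketch: the local branch tiles the feature map into windows of a fixed patch size (per-window attention cost stays constant as resolution grows), while the global branch splits the same map into a fixed number of patches (cross-patch attention cost stays constant). This is a minimal NumPy illustration of the partitioning idea only, not the authors' implementation; the function names, shapes, and parameters are assumptions for exposition, and the deformable attention itself is omitted.

```python
import numpy as np

def local_partition(x, patch_size):
    """Local branch (sketch): split an (H, W, C) feature map into
    non-overlapping windows of a FIXED patch size.
    Returns (num_windows, patch_size * patch_size, C); attention is then
    applied within each window, so its cost per window is constant."""
    H, W, C = x.shape
    p = patch_size
    assert H % p == 0 and W % p == 0, "H and W must be divisible by patch_size"
    x = x.reshape(H // p, p, W // p, p, C)
    # Group the window grid axes together, then flatten each window.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, p * p, C)

def global_partition(x, grid):
    """Global branch (sketch): split the same map into a FIXED NUMBER
    (grid * grid) of patches, so patch size grows with resolution.
    Returns (grid * grid, patch_h * patch_w, C); attention across this
    fixed set of patches models long-range interactions cheaply."""
    H, W, C = x.shape
    assert H % grid == 0 and W % grid == 0, "H and W must be divisible by grid"
    hp, wp = H // grid, W // grid
    x = x.reshape(grid, hp, grid, wp, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(grid * grid, hp * wp, C)

# Example: a 16x16 map with 4 channels.
feat = np.arange(16 * 16 * 4, dtype=np.float32).reshape(16, 16, 4)
local_tokens = local_partition(feat, patch_size=4)   # 16 windows of 16 tokens
global_tokens = global_partition(feat, grid=2)       # 4 patches of 64 tokens
```

Doubling the resolution would double the number of local windows (each still 4x4) but leave the global branch at 4 patches (each now larger), which is how the two branches keep complexity sub-quadratic in the sketch above.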

Authors (6)
  1. Kangliang Liu (2 papers)
  2. Xiangcheng Du (11 papers)
  3. Sijie Liu (3 papers)
  4. Yingbin Zheng (18 papers)
  5. Xingjiao Wu (26 papers)
  6. Cheng Jin (76 papers)
Citations (3)
