Masked Autoencoders as Image Processors (2303.17316v1)

Published 30 Mar 2023 in cs.CV

Abstract: Transformers have shown significant effectiveness for various vision tasks including both high-level vision and low-level vision. Recently, masked autoencoders (MAE) for feature pre-training have further unleashed the potential of Transformers, leading to state-of-the-art performances on various high-level vision tasks. However, the significance of MAE pre-training on low-level vision tasks has not been sufficiently explored. In this paper, we show that masked autoencoders are also scalable self-supervised learners for image processing tasks. We first present an efficient Transformer model considering both channel attention and shifted-window-based self-attention termed CSformer. Then we develop an effective MAE architecture for image processing (MAEIP) tasks. Extensive experimental results show that with the help of MAEIP pre-training, our proposed CSformer achieves state-of-the-art performance on various image processing tasks, including Gaussian denoising, real image denoising, single-image motion deblurring, defocus deblurring, and image deraining.
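
To make the architectural idea concrete, below is a minimal PyTorch sketch of a Transformer block that combines channel attention with window-based self-attention, followed by an MAE-style masked-reconstruction training step in the spirit of the MAEIP pre-training described above. All class names, dimensions, the decoder interface, and the 0.75 mask ratio are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code). Names, dims, window size,
# and the encoder/decoder interface are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gating over feature channels."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.GELU(),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, N, C) tokens
        gate = self.mlp(x.mean(dim=1, keepdim=True))  # (B, 1, C) channel weights
        return x * gate


class WindowSelfAttention(nn.Module):
    """Multi-head self-attention applied within fixed-size token windows."""
    def __init__(self, dim, window=64, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, N, C), N divisible by window
        B, N, C = x.shape
        xw = x.reshape(B * (N // self.window), self.window, C)
        out, _ = self.attn(xw, xw, xw)
        return out.reshape(B, N, C)


class CSBlock(nn.Module):
    """One block: window self-attention and channel attention, each with a residual."""
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.wsa = WindowSelfAttention(dim)
        self.ca = ChannelAttention(dim)

    def forward(self, x):
        x = x + self.wsa(self.norm1(x))
        x = x + self.ca(self.norm2(x))
        return x


def mae_pretrain_step(encoder, decoder, tokens, mask_ratio=0.75):
    """MAE-style step: encode a random subset of patch tokens, reconstruct all.
    The decoder(latent, kept_idx, total_len) interface is an assumption."""
    B, N, C = tokens.shape
    keep = int(N * (1 - mask_ratio))
    idx = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :keep]
    visible = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, C))
    recon = decoder(encoder(visible), idx, N)   # predict the full token sequence
    return nn.functional.mse_loss(recon, tokens)
```

For image processing tasks the reconstruction target is the clean image itself rather than a classification-oriented feature, so after pre-training the encoder can be paired with a task head and fine-tuned on denoising, deblurring, or deraining data.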

Authors (7)
  1. Huiyu Duan (38 papers)
  2. Wei Shen (181 papers)
  3. Xiongkuo Min (139 papers)
  4. Danyang Tu (8 papers)
  5. Long Teng (16 papers)
  6. Jia Wang (163 papers)
  7. Guangtao Zhai (231 papers)
Citations (7)
