SnowFormer: Context Interaction Transformer with Scale-awareness for Single Image Desnowing (2208.09703v3)

Published 20 Aug 2022 in cs.CV

Abstract: Due to diverse and complicated snow degradations, single image desnowing is a challenging image restoration task. Since prior methods cannot handle it well, we propose a novel transformer, SnowFormer, which explores efficient cross-attention to build local-global context interaction across patches and surpasses existing works that employ local operators or vanilla transformers. Compared to prior desnowing methods and universal image restoration methods, SnowFormer has several benefits. First, unlike the multi-head self-attention in recent image restoration Vision Transformers, SnowFormer incorporates a multi-head cross-attention mechanism to perform local-global context interaction between scale-aware snow queries and local-patch embeddings. Second, the snow queries in SnowFormer are generated by a query generator from aggregated scale-aware features, which are rich in potential clean cues, leading to superior restoration results. Third, SnowFormer outperforms state-of-the-art desnowing networks and prevalent universal image restoration transformers on six synthetic and real-world datasets. The code is released at https://github.com/Ephemeral182/SnowFormer.
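
For intuition, here is a minimal sketch of the query-to-patch multi-head cross-attention the abstract describes, with freely learned queries standing in for the scale-aware snow queries (the abstract's query generator is not modeled). The class name, shapes, and the use of PyTorch's nn.MultiheadAttention are illustrative assumptions, not the authors' released implementation; see the linked repository for that.

```python
# Illustrative sketch only; names and shapes are assumptions, not the
# SnowFormer code (see https://github.com/Ephemeral182/SnowFormer).
import torch
import torch.nn as nn

class QueryPatchCrossAttention(nn.Module):
    """Multi-head cross-attention: learned queries attend to patch embeddings."""

    def __init__(self, dim: int, num_heads: int, num_queries: int):
        super().__init__()
        # Stand-in "snow queries"; in SnowFormer these come from a query
        # generator over aggregated scale-aware features (not modeled here).
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) local-patch embeddings
        b = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)  # (B, Q, dim)
        # Queries carry global cues and attend over local patches:
        # local-global context interaction via cross-attention.
        out, _ = self.attn(query=q, key=patch_tokens, value=patch_tokens)
        return out  # (B, Q, dim)

# Usage sketch:
# x = torch.randn(2, 196, 64)             # 2 images, 196 patches, dim 64
# m = QueryPatchCrossAttention(64, 4, 8)  # 4 heads, 8 queries
# y = m(x)                                # -> (2, 8, 64)
```

The key contrast with the self-attention used in recent restoration Vision Transformers is that queries and keys here come from different sources: a small set of global queries versus the full set of local-patch tokens.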

Authors (4)
  1. Sixiang Chen (28 papers)
  2. Tian Ye (65 papers)
  3. Yun Liu (213 papers)
  4. Erkang Chen (16 papers)
Citations (9)
