
DAE-Former: Dual Attention-guided Efficient Transformer for Medical Image Segmentation (2212.13504v3)

Published 27 Dec 2022 in cs.CV

Abstract: Transformers have recently gained attention in the computer vision domain due to their ability to model long-range dependencies. However, the self-attention mechanism, which is the core part of the Transformer model, usually suffers from quadratic computational complexity with respect to the number of tokens. Many architectures attempt to reduce model complexity by limiting the self-attention mechanism to local regions or by redesigning the tokenization process. In this paper, we propose DAE-Former, a novel method that seeks to provide an alternative perspective by efficiently designing the self-attention mechanism. More specifically, we reformulate the self-attention mechanism to capture both spatial and channel relations across the whole feature dimension while staying computationally efficient. Furthermore, we redesign the skip connection path by including a cross-attention module to ensure feature reusability and enhance localization power. Our method outperforms state-of-the-art methods on multi-organ cardiac and skin lesion segmentation datasets without requiring pre-trained weights. The code is publicly available at https://github.com/mindflow-institue/DAEFormer.
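
To make the abstract's two attention ideas concrete, below is a minimal PyTorch sketch of (a) a linear-complexity "efficient" spatial attention, where softmax is applied to queries and keys separately so the key-value context is formed before multiplying by the queries, and (b) a channel (transpose) attention that computes the attention map across the feature dimension. Module names, head counts, and layer layout are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn


class EfficientSpatialAttention(nn.Module):
    """Linear-complexity attention over spatial tokens.

    Softmax is applied to queries (over channels) and keys (over tokens)
    separately, so the key-value context K^T V is formed first and the
    cost is linear in the number of tokens rather than quadratic.
    """

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, C) with N = H * W tokens
        b, n, c = x.shape
        h = self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # split heads -> (B, heads, N, C // heads)
        q, k, v = (t.reshape(b, n, h, c // h).transpose(1, 2) for t in (q, k, v))
        q = q.softmax(dim=-1)                   # normalize queries over channels
        k = k.softmax(dim=-2)                   # normalize keys over tokens
        context = k.transpose(-2, -1) @ v       # (B, heads, d, d), d = C // heads
        out = (q @ context).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)


class ChannelAttention(nn.Module):
    """Transpose attention: the attention map is computed across channels
    instead of tokens, capturing inter-channel dependencies."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, C)
        b, n, c = x.shape
        h = self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # heads over channels -> (B, heads, C // heads, N)
        q, k, v = (t.reshape(b, n, h, c // h).permute(0, 2, 3, 1) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)  # (B, heads, d, d)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(b, n, c)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 196, 64)  # e.g. a 14x14 feature map with 64 channels
    print(EfficientSpatialAttention(64)(x).shape)  # torch.Size([2, 196, 64])
    print(ChannelAttention(64)(x).shape)           # torch.Size([2, 196, 64])
```

In the full architecture, the skip connections additionally route encoder features through a cross-attention module before they are fused with decoder features; that component is omitted from this sketch for brevity.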

Authors (5)
  1. Reza Azad (52 papers)
  2. René Arimond (2 papers)
  3. Ehsan Khodapanah Aghdam (13 papers)
  4. Amirhossein Kazerouni (19 papers)
  5. Dorit Merhof (75 papers)
Citations (57)
