HiFormer: Hierarchical Multi-scale Representations Using Transformers for Medical Image Segmentation (2207.08518v2)

Published 18 Jul 2022 in cs.CV and cs.AI

Abstract: Convolutional neural networks (CNNs) have been the de facto standard for medical image segmentation tasks. However, they are limited in modeling long-range dependencies and spatial correlations due to the local nature of the convolution operation. Although transformers were developed to address this issue, they fail to capture low-level features. Both local and global features, however, have been shown to be crucial for dense prediction tasks such as segmentation in challenging contexts. In this paper, we propose HiFormer, a novel method that efficiently bridges a CNN and a transformer for medical image segmentation. Specifically, we design two multi-scale feature representations using the seminal Swin Transformer module and a CNN-based encoder. To secure a fine fusion of the global and local features obtained from these two representations, we propose a Double-Level Fusion (DLF) module in the skip connection of the encoder-decoder structure. Extensive experiments on various medical image segmentation datasets demonstrate the effectiveness of HiFormer over other CNN-based, transformer-based, and hybrid methods in terms of computational complexity as well as quantitative and qualitative results. Our code is publicly available at: https://github.com/amirhossein-kz/HiFormer
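The core idea of fusing a CNN's local features with a transformer's global tokens can be illustrated with a minimal cross-attention sketch. The function names, the single-head design, and the random projections below are illustrative assumptions for exposition only, not the authors' DLF implementation (see the linked repository for the real module):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(local_feats, global_feats, d_k=16, seed=0):
    """Toy fusion of local (CNN-style) and global (transformer-style) features.

    local_feats:  (N_local, d)  fine-grained patch features acting as queries.
    global_feats: (N_global, d) coarse global tokens acting as keys/values.
    Returns fused features of shape (N_local, d).
    Weights are random here; in a trained model they would be learned.
    """
    rng = np.random.default_rng(seed)
    d = local_feats.shape[1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)

    Q = local_feats @ Wq          # queries from local features
    K = global_feats @ Wk         # keys from global tokens
    V = global_feats @ Wv         # values from global tokens
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)

    # Residual connection: local detail is preserved, global context is added.
    return local_feats + attn @ V

# Toy shapes: 64 local patch features and 16 global tokens, 32-dim embeddings.
local = np.random.default_rng(1).standard_normal((64, 32))
glob = np.random.default_rng(2).standard_normal((16, 32))
fused = cross_attention_fuse(local, glob)
print(fused.shape)  # (64, 32)
```

The residual form means the fused representation degrades gracefully to the local features if the attention output is small, one common design choice when combining two feature streams.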

Authors (7)
  1. Moein Heidari (18 papers)
  2. Amirhossein Kazerouni (19 papers)
  3. Milad Soltany (4 papers)
  4. Reza Azad (52 papers)
  5. Ehsan Khodapanah Aghdam (13 papers)
  6. Julien Cohen-Adad (42 papers)
  7. Dorit Merhof (75 papers)
Citations (137)