UNesT: Local Spatial Representation Learning with Hierarchical Transformer for Efficient Medical Segmentation (2209.14378v2)

Published 28 Sep 2022 in eess.IV and cs.CV

Abstract: Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. The Transformer reformats the image into separate patches and realizes global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and losing it can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address such challenges, and inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets spanning multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model completes the whole-brain segmentation task, covering all 133 tissue classes, in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks.
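The core idea in the abstract, local self-attention within blocks of spatially adjacent patches followed by hierarchical aggregation of neighboring blocks, can be sketched as follows. This is a hypothetical, simplified NumPy illustration of the mechanism, not the authors' implementation; the block counts, token counts, and embedding dimension are made-up toy values.

```python
import numpy as np

def local_self_attention(x):
    """Toy scaled dot-product self-attention applied independently per block.
    x has shape (num_blocks, tokens_per_block, dim); attention only mixes
    tokens inside the same block, i.e. local communication."""
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)   # (b, n, n)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ x                                  # (b, n, d)

def aggregate_blocks(x, factor=2):
    """Merge each factor^3 neighborhood of 3D blocks into one larger block,
    a stand-in for the hierarchical aggregation step (assumed layout:
    spatially adjacent blocks are contiguous along the first axis)."""
    b, n, d = x.shape
    assert b % factor**3 == 0, "block count must be divisible by factor^3"
    return x.reshape(b // factor**3, factor**3 * n, d)

# 64 blocks of 8 patch tokens each, embedding dim 16
x = np.random.rand(64, 8, 16)
x = local_self_attention(x)   # communication within each small block
x = aggregate_blocks(x)       # fuse 2x2x2 neighborhoods -> 8 blocks of 64 tokens
x = local_self_attention(x)   # communication across the wider neighborhood
print(x.shape)                # (8, 64, 16)
```

Attention cost stays cubic-in-block-size rather than quadratic in the full token count, which is why this style of hierarchy converges faster and scales to many-class 3D volumes.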

Authors (15)
  1. Xin Yu
  2. Qi Yang
  3. Yinchi Zhou
  4. Riqiang Gao
  5. Ho Hin Lee
  6. Thomas Li
  7. Shunxing Bao
  8. Zhoubing Xu
  9. Thomas A. Lasko
  10. Richard G. Abramson
  11. Zizhao Zhang
  12. Yuankai Huo
  13. Bennett A. Landman
  14. Yucheng Tang
  15. Leon Y. Cai