
A Data-scalable Transformer for Medical Image Segmentation: Architecture, Model Efficiency, and Benchmark (2203.00131v5)

Published 28 Feb 2022 in eess.IV and cs.CV

Abstract: Transformers have demonstrated remarkable performance in natural language processing and computer vision. However, existing vision Transformers struggle to learn from limited medical data and are unable to generalize on diverse medical image tasks. To tackle these challenges, we present MedFormer, a data-scalable Transformer designed for generalizable 3D medical image segmentation. Our approach incorporates three key elements: a desirable inductive bias, hierarchical modeling with linear-complexity attention, and multi-scale feature fusion that integrates spatial and semantic information globally. MedFormer can learn across tiny- to large-scale data without pre-training. Comprehensive experiments demonstrate MedFormer's potential as a versatile segmentation backbone, outperforming CNNs and vision Transformers on seven public datasets covering multiple modalities (e.g., CT and MRI) and various medical targets (e.g., healthy organs, diseased tissues, and tumors). We provide public access to our models and evaluation pipeline, offering solid baselines and unbiased comparisons to advance a wide range of downstream clinical applications.
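The abstract highlights hierarchical modeling with linear-complexity attention. The paper's exact attention mechanism is not detailed here, but the general idea of reducing attention from quadratic to linear cost in sequence length can be sketched with a kernelized formulation (in the style of Katharopoulos et al., 2020): replace `softmax(QK^T)V`, which is O(N^2) in the number of tokens N, with `phi(Q)(phi(K)^T V)`, which is O(N·d^2). All names and shapes below are illustrative, not MedFormer's actual implementation.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear-complexity attention (illustrative sketch, not
    MedFormer's actual mechanism).

    Instead of softmax(Q K^T) V, which materializes an (N, N) matrix,
    compute phi(Q) @ (phi(K)^T @ V): the (d, d_v) summary phi(K)^T V is
    independent of N, so cost scales linearly with sequence length.
    """
    # elu(x) + 1 as a strictly positive feature map
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                     # (d, d_v) summary, independent of N
    z = Qp @ Kp.sum(axis=0)           # (N,) per-token normalizer
    return (Qp @ kv) / (z[:, None] + eps)

# Toy usage: 3D voxel patches flattened into a token sequence
rng = np.random.default_rng(0)
N, d = 64, 8                          # e.g., 4x4x4 patch grid, 8-dim head
Q = rng.standard_normal((N, d))
K = rng.standard_normal((N, d))
V = rng.standard_normal((N, d))
out = linear_attention(Q, K, V)
print(out.shape)                      # (64, 8)
```

Because the positive feature map makes each output row a normalized weighted average of the rows of `V`, the result behaves like attention while avoiding the N-by-N score matrix, which matters for volumetric medical images where the token count grows cubically with resolution.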

Authors (6)
  1. Yunhe Gao (19 papers)
  2. Mu Zhou (25 papers)
  3. Di Liu (107 papers)
  4. Zhennan Yan (10 papers)
  5. Shaoting Zhang (133 papers)
  6. Dimitris N. Metaxas (84 papers)
Citations (53)
