A-Eval: A Benchmark for Cross-Dataset Evaluation of Abdominal Multi-Organ Segmentation (2309.03906v1)

Published 7 Sep 2023 in eess.IV and cs.CV

Abstract: Although deep learning has revolutionized abdominal multi-organ segmentation, models often struggle with generalization due to training on small, specific datasets. With the recent emergence of large-scale datasets, some important questions arise: Can models trained on these datasets generalize well to different ones? If yes/no, how can their generalizability be further improved? To address these questions, we introduce A-Eval, a benchmark for the cross-dataset Evaluation ('Eval') of Abdominal ('A') multi-organ segmentation. We employ training sets from four large-scale public datasets: FLARE22, AMOS, WORD, and TotalSegmentator, each providing extensive labels for abdominal multi-organ segmentation. For evaluation, we incorporate the validation sets from these datasets along with the training set from the BTCV dataset, forming a robust benchmark comprising five distinct datasets. We evaluate the generalizability of various models using the A-Eval benchmark, with a focus on diverse data usage scenarios: training on individual datasets independently, utilizing unlabeled data via pseudo-labeling, mixing different modalities, and joint training across all available datasets. Additionally, we explore the impact of model size on cross-dataset generalizability. Through these analyses, we underline the importance of effective data usage in enhancing models' generalization capabilities, offering valuable insights for assembling large-scale datasets and improving training strategies. The code and pre-trained models are available at https://github.com/uni-medical/A-Eval.
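The core idea of the benchmark is to train on one data configuration and score the same model on all five evaluation sets. The sketch below illustrates that cross-dataset evaluation loop only; it is not the authors' pipeline (see the linked repository for that), and `load_cases` and `predict` are hypothetical stand-ins for a data loader and a trained segmentation model.

```python
# Minimal sketch of cross-dataset evaluation in the spirit of A-Eval (illustrative only).
import numpy as np

EVAL_SETS = ["FLARE22-val", "AMOS-val", "WORD-val", "TotalSegmentator-val", "BTCV-train"]

def dice(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice coefficient for one organ label in a 3D segmentation volume."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

def evaluate(predict, load_cases, organ_labels):
    """Average per-organ Dice of one trained model on every evaluation set."""
    scores = {}
    for name in EVAL_SETS:
        per_case = []
        for image, gt in load_cases(name):   # hypothetical loader yielding (CT volume, label mask)
            pred = predict(image)             # hypothetical trained segmentation model
            per_case.append(np.mean([dice(pred, gt, l) for l in organ_labels]))
        scores[name] = float(np.mean(per_case))
    # Generalizability is read off the scores on sets the model was NOT trained on.
    return scores
```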

Authors (14)
  1. Ziyan Huang (18 papers)
  2. Zhongying Deng (25 papers)
  3. Jin Ye (38 papers)
  4. Haoyu Wang (310 papers)
  5. Yanzhou Su (26 papers)
  6. Tianbin Li (20 papers)
  7. Hui Sun (83 papers)
  8. Junlong Cheng (9 papers)
  9. Jianpin Chen (4 papers)
  10. Junjun He (78 papers)
  11. Yun Gu (44 papers)
  12. Shaoting Zhang (133 papers)
  13. Lixu Gu (9 papers)
  14. Yu Qiao (563 papers)
Citations (2)
