Concatenate, Fine-tuning, Re-training: A SAM-enabled Framework for Semi-supervised 3D Medical Image Segmentation (2403.11229v1)

Published 17 Mar 2024 in cs.CV

Abstract: Segment Anything Model (SAM) fine-tuning has shown remarkable performance in fully supervised medical image segmentation, but it requires precise annotations. To reduce annotation cost while maintaining satisfactory performance, in this work we leverage the capabilities of SAM to establish semi-supervised medical image segmentation models. Rethinking the requirements of effectiveness, efficiency, and compatibility, we propose a three-stage framework: Concatenate, Fine-tuning, and Re-training (CFR). Current fine-tuning approaches mostly perform 2D slice-wise fine-tuning, which disregards the contextual information between adjacent slices. Our concatenation strategy mitigates the mismatch between natural images and 3D medical images. The concatenated images are then used to fine-tune SAM, which provides robust pseudo-labels for initialization. Afterwards, we train a 3D semi-supervised segmentation model while maintaining the same parameter size as conventional segmenters such as V-Net. Our CFR framework is plug-and-play and easily compatible with various popular semi-supervised methods. Extensive experiments validate that CFR achieves significant improvements under both moderate-annotation and scarce-annotation settings across four datasets. In particular, the CFR framework improves the Dice score of Mean Teacher from 29.68% to 74.40% with only one labeled sample on the LA dataset.
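The concatenation idea can be sketched in a few lines: each slice of a 3D volume is stacked with its two neighbors into a pseudo-RGB image, so a 2D model expecting three-channel input (such as SAM) sees inter-slice context. This is a minimal, hypothetical illustration of the strategy described in the abstract, not the authors' implementation; the function name and edge-padding choice are assumptions.

```python
import numpy as np

def concatenate_slices(volume: np.ndarray) -> np.ndarray:
    """Turn a (D, H, W) volume into D pseudo-RGB images of shape (D, H, W, 3).

    Channel 0/1/2 hold the previous/current/next slice, giving a 2D model
    a small amount of 3D context. Boundary slices are handled by repeating
    the edge slice (an illustrative choice, not necessarily the paper's).
    """
    padded = np.pad(volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    # previous slice, current slice, next slice as the three channels
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=-1)

# Toy example: an 8-slice volume becomes 8 three-channel images.
vol = np.random.rand(8, 64, 64)
imgs = concatenate_slices(vol)
print(imgs.shape)  # (8, 64, 64, 3)
```

The middle channel of image `i` is exactly slice `i`, so a slice-wise pipeline can adopt this input format without changing its labels.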

References (54)
  1. X. Wang, X. Zhang, Y. Cao, W. Wang, C. Shen, and T. Huang, “Seggpt: Towards segmenting everything in context,” in ICCV, October 2023, pp. 1130–1140.
  2. X. Zou, J. Yang, H. Zhang, F. Li, L. Li, J. Wang, L. Wang, J. Gao, and Y. J. Lee, “Segment everything everywhere all at once,” NIPS, vol. 36, 2023.
  3. A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo et al., “Segment anything,” in ICCV, 2023, pp. 4015–4026.
  4. A. Khani, S. Asgari, A. Sanghi, A. M. Amiri, and G. Hamarneh, “SLiMe: Segment like me,” in ICLR, 2024.
  5. J. Wu, R. Fu, H. Fang, Y. Liu, Z. Wang, Y. Xu, Y. Jin, and T. Arbel, “Medical SAM Adapter: Adapting segment anything model for medical image segmentation,” arXiv preprint arXiv:2304.12620, 2023.
  6. K. Zhang and D. Liu, “Customized segment anything model for medical image segmentation,” arXiv preprint arXiv:2304.13785, 2023.
  7. Z. Xiong, Q. Xia, Z. Hu, N. Huang, C. Bian, Y. Zheng, S. Vesal, N. Ravikumar, A. Maier, X. Yang et al., “A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging,” Medical Image Analysis, vol. 67, p. 101832, 2021.
  8. D. Chen, Y. Bai, W. Shen, Q. Li, L. Yu, and Y. Wang, “MagicNet: Semi-supervised multi-organ segmentation via magic-cube partition and recovery,” in CVPR, 2023, pp. 23869–23878.
  9. Z. Xu, Y. Wang, D. Lu, X. Luo, J. Yan, Y. Zheng, and R. K.-y. Tong, “Ambiguity-selective consistency regularization for mean-teacher semi-supervised medical image segmentation,” Medical Image Analysis, vol. 88, p. 102880, 2023.
  10. Y. Bai, D. Chen, Q. Li, W. Shen, and Y. Wang, “Bidirectional copy-paste for semi-supervised medical image segmentation,” in CVPR, 2023, pp. 11514–11524.
  11. L. Wu, L. Fang, X. He, M. He, J. Ma, and Z. Zhong, “Querying labeled for unlabeled: Cross-image semantic consistency guided semi-supervised semantic segmentation,” IEEE TPAMI, 2023.
  12. L. Yu, S. Wang, X. Li, C.-W. Fu, and P.-A. Heng, “Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation,” in MICCAI. Springer, 2019, pp. 605–613.
  13. A. Tarvainen and H. Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” NIPS, vol. 30, 2017.
  14. Y. Ouali, C. Hudelot, and M. Tami, “Semi-supervised semantic segmentation with cross-consistency training,” in CVPR, 2020, pp. 12674–12684.
  15. X. Chen, Y. Yuan, G. Zeng, and J. Wang, “Semi-supervised semantic segmentation with cross pseudo supervision,” in CVPR, 2021, pp. 2613–2622.
  16. Y. Wang, B. Xiao, X. Bi, W. Li, and X. Gao, “MCF: Mutual correction framework for semi-supervised medical image segmentation,” in CVPR, 2023, pp. 15651–15660.
  17. J. Miao, C. Chen, F. Liu, H. Wei, and P.-A. Heng, “CauSSL: Causality-inspired semi-supervised learning for medical image segmentation,” in ICCV, 2023, pp. 21426–21437.
  18. F. Wu and X. Zhuang, “Minimizing estimated risks on unlabeled data: a new formulation for semi-supervised medical image segmentation,” IEEE TPAMI, vol. 45, no. 5, pp. 6021–6036, 2022.
  19. J. Ma, Y. He, F. Li, L. Han, C. You, and B. Wang, “Segment anything in medical images,” Nature Communications, vol. 15, no. 1, p. 654, 2024.
  20. X. Lin, Y. Xiang, L. Zhang, X. Yang, Z. Yan, and L. Yu, “SAMUS: Adapting segment anything model for clinically-friendly and generalizable ultrasound image segmentation,” arXiv preprint arXiv:2309.06824, 2023.
  21. J. Zhang, H. Peng, K. Wu, M. Liu, B. Xiao, J. Fu, and L. Yuan, “MiniViT: Compressing vision transformers with weight multiplexing,” in CVPR, 2022, pp. 12145–12154.
  22. T. Chen, Z. Zhang, Y. Cheng, A. Awadallah, and Z. Wang, “The principle of diversity: Training stronger vision transformers calls for reducing all levels of redundancy,” in CVPR, 2022, pp. 12020–12030.
  23. A. Aghajanyan, L. Zettlemoyer, and S. Gupta, “Intrinsic dimensionality explains the effectiveness of language model fine-tuning,” arXiv preprint arXiv:2012.13255, 2020.
  24. L. Alzubaidi, M. Al-Amidie, A. Al-Asadi, A. J. Humaidi, O. Al-Shamma, M. A. Fadhel, J. Zhang, J. Santamaría, and Y. Duan, “Novel transfer learning approach for medical imaging with limited labeled data,” Cancers, vol. 13, no. 7, p. 1590, 2021.
  25. R. Jiao, Y. Zhang, L. Ding, B. Xue, J. Zhang, R. Cai, and C. Jin, “Learning with limited annotations: a survey on deep semi-supervised learning for medical image segmentation,” Computers in Biology and Medicine, p. 107840, 2023.
  26. F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016, pp. 565–571.
  27. S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, C. Li, J. Yang, H. Su, J. Zhu et al., “Grounding dino: Marrying dino with grounded pre-training for open-set object detection,” arXiv preprint arXiv:2303.05499, 2023.
  28. Y. Liu, J. Zhang, Z. She, A. Kheradmand, and M. Armand, “SAMM (Segment Any Medical Model): A 3d slicer integration to sam,” arXiv preprint arXiv:2304.05622, 2023.
  29. T. Wald, S. Roy, G. Koehler, N. Disch, M. R. Rokuss, J. Holzschuh, D. Zimmerer, and K. Maier-Hein, “Sam. md: Zero-shot medical image segmentation capabilities of the segment anything model,” in MIDL, short paper track, 2023.
  30. V. I. Butoi*, J. J. G. Ortiz*, T. Ma, M. R. Sabuncu, J. Guttag, and A. V. Dalca, “Universeg: Universal medical image segmentation,” ICCV, 2023.
  31. Z. Huang, H. Wang, Z. Deng, J. Ye, Y. Su, H. Sun, J. He, Y. Gu, L. Gu, S. Zhang et al., “Stu-net: Scalable and transferable medical image segmentation models empowered by large-scale supervised pre-training,” arXiv preprint arXiv:2304.06716, 2023.
  32. J. Cheng, J. Ye, Z. Deng, J. Chen, T. Li, H. Wang, Y. Su, Z. Huang, J. Chen, L. Jiang et al., “SAM-Med2D,” arXiv preprint arXiv:2308.16184, 2023.
  33. H. Wang, S. Guo, J. Ye, Z. Deng, J. Cheng, T. Li, J. Chen, Y. Su, Z. Huang, Y. Shen et al., “SAM-Med3D,” arXiv preprint arXiv:2310.15161, 2023.
  34. Y. Huang, X. Yang, L. Liu, H. Zhou, A. Chang, X. Zhou, R. Chen, J. Yu, J. Chen, C. Chen et al., “Segment anything model for medical images?” Medical Image Analysis, vol. 92, p. 103061, 2024.
  35. M. A. Mazurowski, H. Dong, H. Gu, J. Yang, N. Konz, and Y. Zhang, “Segment anything model for medical image analysis: an experimental study,” Medical Image Analysis, vol. 89, p. 102918, 2023.
  36. S. Gong, Y. Zhong, W. Ma, J. Li, Z. Wang, J. Zhang, P.-A. Heng, and Q. Dou, “3dsam-adapter: Holistic adaptation of sam from 2d to 3d for promptable medical image segmentation,” arXiv preprint arXiv:2306.13465, 2023.
  37. E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, “LoRA: Low-rank adaptation of large language models,” in ICLR, 2022.
  38. D.-H. Lee et al., “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” in Workshop on Challenges in Representation Learning, ICML, vol. 3, no. 2. Atlanta, 2013, p. 896.
  39. W. Bai, O. Oktay, M. Sinclair, H. Suzuki, M. Rajchl, G. Tarroni, B. Glocker, A. King, P. M. Matthews, and D. Rueckert, “Semi-supervised learning for network-based cardiac MR image segmentation,” in MICCAI. Springer, 2017, pp. 253–260.
  40. K. Chaitanya, E. Erdil, N. Karani, and E. Konukoglu, “Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation,” Medical Image Analysis, vol. 87, p. 102792, 2023.
  41. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,” http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
  42. N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, “Parameter-efficient transfer learning for NLP,” in ICML. PMLR, 2019, pp. 2790–2799.
  43. X. Luo, J. Chen, T. Song, and G. Wang, “Semi-supervised medical image segmentation through dual-task consistency,” in AAAI, vol. 35, no. 10, 2021, pp. 8801–8809.
  44. V. Verma, K. Kawaguchi, A. Lamb, J. Kannala, A. Solin, Y. Bengio, and D. Lopez-Paz, “Interpolation consistency training for semi-supervised learning,” Neural Networks, vol. 145, pp. 90–106, 2022.
  45. Z. Xu, Y. Wang, D. Lu, L. Yu, J. Yan, J. Luo, K. Ma, Y. Zheng, and R. K.-y. Tong, “All-around real label supervision: Cyclic prototype consistency learning for semi-supervised medical image segmentation,” IEEE JBHI, vol. 26, no. 7, pp. 3174–3184, 2022.
  46. X. Luo, G. Wang, W. Liao, J. Chen, T. Song, Y. Chen, S. Zhang, D. N. Metaxas, and S. Zhang, “Semi-supervised medical image segmentation via uncertainty rectified pyramid consistency,” Medical Image Analysis, vol. 80, p. 102517, 2022.
  47. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: learning dense volumetric segmentation from sparse annotation,” in MICCAI. Springer, 2016, pp. 424–432.
  48. B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” IEEE TMI, vol. 34, no. 10, pp. 1993–2024, 2014.
  49. S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. S. Kirby, J. B. Freymann, K. Farahani, and C. Davatzikos, “Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features,” Scientific data, vol. 4, no. 1, pp. 1–13, 2017.
  50. S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, R. T. Shinohara, C. Berger, S. M. Ha, M. Rozycki et al., “Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge,” arXiv preprint arXiv:1811.02629, 2018.
  51. B. Landman, Z. Xu, J. Igelsias, M. Styner, T. Langerak, and A. Klein, “Miccai multi-atlas labeling beyond the cranial vault–workshop and challenge,” in Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, vol. 5, 2015, p. 12.
  52. E. Gibson, F. Giganti, Y. Hu, E. Bonmati, S. Bandula, K. Gurusamy, B. Davidson, S. P. Pereira, M. J. Clarkson, and D. C. Barratt, “Automatic multi-organ segmentation on abdominal ct with dense v-networks,” IEEE TMI, vol. 37, no. 8, pp. 1822–1834, 2018.
  53. Y. Wu, Z. Wu, Q. Wu, Z. Ge, and J. Cai, “Exploring smoothness and class-separation for semi-supervised medical image segmentation,” in MICCAI. Springer, 2022, pp. 34–43.
  54. J. Liu, C. Desrosiers, and Y. Zhou, “Semi-supervised medical image segmentation using cross-model pseudo-supervision with shape awareness and local context constraints,” in MICCAI. Springer, 2022, pp. 140–150.
Authors (6)
  1. Shumeng Li (5 papers)
  2. Lei Qi (84 papers)
  3. Qian Yu (116 papers)
  4. Jing Huo (45 papers)
  5. Yinghuan Shi (79 papers)
  6. Yang Gao (761 papers)
Citations (2)
