Cross-modal tumor segmentation using generative blending augmentation and self-training (2304.01705v2)

Published 4 Apr 2023 in eess.IV and cs.CV

Abstract: Objectives: Data scarcity and domain shifts lead to biased training sets that do not accurately represent deployment conditions. A related practical problem is cross-modal image segmentation, where the objective is to segment unlabelled images using previously labelled datasets from other imaging modalities. Methods: We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique called Generative Blending Augmentation (GBA). GBA leverages a SinGAN model to learn representative generative features from a single training image in order to realistically diversify tumor appearances. This way, we compensate for image synthesis errors, subsequently improving the generalization power of a downstream segmentation model. The proposed augmentation is further combined with an iterative self-training procedure leveraging pseudo labels at each pass. Results: The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge, with the best mean Dice similarity and average symmetric surface distance measures. Conclusion and significance: Local contrast alteration of tumor appearances and iterative self-training with pseudo labels are likely to lead to performance improvements in a variety of segmentation contexts.
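The core blending idea behind GBA (altering local tumor contrast by mixing a generated appearance into the real image) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the paper generates the synthetic appearance with a SinGAN model, whereas here `synthetic` is any array of the same shape, and the radially decaying alpha mask is an assumed simplification.

```python
import numpy as np

def blending_mask(shape, center, radius, alpha_max=0.8):
    """Radially decaying alpha mask: alpha_max at the center,
    falling linearly to 0 at distance `radius` and beyond."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    d = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    return alpha_max * np.clip(1.0 - d / radius, 0.0, 1.0)

def blend_tumor(image, synthetic, center, radius, alpha_max=0.8):
    """Convex combination of the real image and a synthetic tumor
    appearance, localized around `center` (a toy stand-in for GBA's
    SinGAN-generated appearances)."""
    a = blending_mask(image.shape, center, radius, alpha_max)
    return (1.0 - a) * image + a * synthetic
```

Because the mask vanishes outside `radius`, only the tumor region is altered, which matches the abstract's description of local contrast alteration.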
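The iterative self-training procedure (retraining at each pass on labeled data plus confident pseudo-labels from the previous model) can be sketched generically. Assumptions: `fit` and `predict` stand in for an arbitrary trainable segmenter (the paper's pipeline is far more involved, operating on 3D volumes); the confidence threshold and round count are hypothetical parameters.

```python
def self_train(fit, predict, labeled_x, labeled_y, unlabeled_x,
               rounds=3, conf_thresh=0.9):
    """Iterative self-training with pseudo-labels.
    fit(xs, ys) -> model; predict(model, x) -> probability of class 1."""
    model = fit(list(labeled_x), list(labeled_y))
    for _ in range(rounds):
        xs, ys = list(labeled_x), list(labeled_y)
        for x in unlabeled_x:
            p = predict(model, x)
            conf = max(p, 1.0 - p)      # confidence of the hard label
            if conf >= conf_thresh:     # keep only confident pseudo-labels
                xs.append(x)
                ys.append(1 if p >= 0.5 else 0)
        model = fit(xs, ys)             # retrain from scratch each pass
    return model
```

Filtering by confidence limits the propagation of early segmentation errors into later passes, which is the usual failure mode of naive self-training.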
