
Reliable Source Approximation: Source-Free Unsupervised Domain Adaptation for Vestibular Schwannoma MRI Segmentation

Published 25 May 2024 in eess.IV and cs.CV (arXiv:2405.16102v1)

Abstract: Source-Free Unsupervised Domain Adaptation (SFUDA) has recently become a focus in medical image domain adaptation, as it uses only the source model and requires no annotated target data. However, current SFUDA approaches cannot tackle complex segmentation tasks across different MRI sequences, such as vestibular schwannoma segmentation. To address this problem, we propose Reliable Source Approximation (RSA), which generates source-like, structure-preserving images from the target domain for updating model parameters and adapting to domain shifts. Specifically, RSA deploys a conditional diffusion model to generate multiple source-like images under the guidance of varying edges of one target image. An uncertainty estimation module is then introduced to predict and refine reliable pseudo labels of the generated images, and prediction consistency is used to select the most reliable generations. All reliable generated images and their pseudo labels are then used to update the model. RSA is validated on vestibular schwannoma segmentation across multi-modality MRI. Experimental results demonstrate that RSA consistently improves domain adaptation performance over other state-of-the-art SFUDA methods. Code is available at https://github.com/zenghy96/Reliable-Source-Approximation.
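The prediction-consistency selection described in the abstract can be illustrated with a minimal sketch: given several candidate pseudo labels (one per generated source-like image), score each candidate by its average agreement with the others and keep the most mutually consistent one. This is a simplified stand-in for the paper's actual module, not the authors' implementation; the function names, the use of Dice overlap as the agreement measure, and the toy masks are all illustrative assumptions.

```python
import numpy as np

def dice(a, b, eps=1e-6):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def select_most_consistent(pseudo_labels):
    """Return the index of the pseudo label that agrees most with the others.

    pseudo_labels: list of binary masks, one per generated image.
    Each candidate is scored by its mean Dice against all other candidates,
    so an outlier generation receives a low score and is filtered out.
    """
    n = len(pseudo_labels)
    scores = []
    for i in range(n):
        others = [dice(pseudo_labels[i], pseudo_labels[j])
                  for j in range(n) if j != i]
        scores.append(np.mean(others))
    return int(np.argmax(scores))

# Toy example: three candidate pseudo labels for one target image.
base = np.zeros((8, 8), dtype=bool)
base[2:6, 2:6] = True                      # plausible tumour region
noisy = base.copy()
noisy[0, 0] = True                         # near-identical candidate
outlier = np.zeros((8, 8), dtype=bool)
outlier[6:8, 6:8] = True                   # inconsistent candidate

best = select_most_consistent([base, noisy, outlier])
# `base` and `noisy` agree strongly with each other; `outlier` agrees with neither.
```

In the full method this selection runs on pseudo labels that have already been refined by the uncertainty estimation module, and the surviving image/label pairs are then used to update the segmentation model.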
