A mirror-Unet architecture for PET/CT lesion segmentation (2309.13398v1)
Abstract: Automatic lesion detection and segmentation from [$^{18}$F]FDG PET/CT scans is a challenging task, due to the diversity of lesion shapes, sizes, FDG uptake levels, and locations, and to the fact that physiological uptake also occurs in healthy tissues. In this work, we propose a deep learning method for the segmentation of oncologic lesions, based on a combination of two UNet-3D branches. First, one of the network's branches is trained to segment a group of tissues from CT images. The other branch is then trained to segment the lesions from PET images, combining at the bottleneck the embedded features of the already-trained CT branch. We trained and validated our networks on the AutoPET MICCAI 2023 Challenge dataset. Our code is available at: https://github.com/yrotstein/AutoPET2023_Mv1.
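The two-branch design described above can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation (see the linked repository for that): the layer widths, the frozen-CT-encoder assumption, and the omission of skip connections are all simplifications, and the class and function names are hypothetical.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Basic 3D conv + ReLU unit (illustrative; real UNet blocks are deeper)
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MirrorUNetSketch(nn.Module):
    """Hypothetical sketch: two UNet-3D encoders whose bottleneck
    embeddings are concatenated before a shared lesion decoder."""

    def __init__(self, feat=8):
        super().__init__()
        # CT branch encoder (assumed pre-trained on tissue segmentation)
        self.ct_enc = nn.Sequential(
            conv_block(1, feat), nn.MaxPool3d(2), conv_block(feat, 2 * feat))
        # PET branch encoder, trained for lesion segmentation
        self.pet_enc = nn.Sequential(
            conv_block(1, feat), nn.MaxPool3d(2), conv_block(feat, 2 * feat))
        # Bottleneck fusion: concatenate CT and PET embeddings, then mix
        self.fuse = conv_block(4 * feat, 2 * feat)
        # Decoder back to a voxel-wise lesion map (skips omitted for brevity)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            conv_block(2 * feat, feat),
            nn.Conv3d(feat, 1, kernel_size=1))

    def forward(self, ct, pet):
        z_ct = self.ct_enc(ct)      # CT embedding at the bottleneck
        z_pet = self.pet_enc(pet)   # PET embedding at the bottleneck
        z = self.fuse(torch.cat([z_ct, z_pet], dim=1))
        return self.dec(z)          # voxel-wise lesion logits

# Toy volumes: batch of 1, single channel, 16^3 voxels per modality
ct = torch.randn(1, 1, 16, 16, 16)
pet = torch.randn(1, 1, 16, 16, 16)
out = MirrorUNetSketch()(ct, pet)
print(tuple(out.shape))  # (1, 1, 16, 16, 16)
```

In this sketch the CT encoder's weights would be learned first on the tissue-segmentation task and then held fixed while the PET branch and decoder are trained, matching the two-stage training order described in the abstract.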