Large Intestine 3D Shape Refinement Using Point Diffusion Models for Digital Phantom Generation (2309.08289v2)
Abstract: Accurate 3D modeling of human organs plays a crucial role in building computational phantoms for virtual imaging trials. However, generating anatomically plausible reconstructions of organ surfaces from computed tomography scans remains challenging for many structures in the human body, and the large intestine is a particularly difficult case. In this study, we leverage recent advances in geometric deep learning and denoising diffusion probabilistic models to refine segmentation results for the large intestine. We first represent the organ as a point cloud sampled from the surface of its 3D segmentation mask. We then employ a hierarchical variational autoencoder to obtain global and local latent representations of the organ's shape, and train two conditional denoising diffusion models in this hierarchical latent space to perform shape refinement. Finally, we incorporate a state-of-the-art surface reconstruction model to generate smooth meshes from the resulting complete point clouds. Experimental results demonstrate that our approach captures both the global distribution of the organ's shape and its fine details. Compared with the initial segmentation, the complete refinement pipeline substantially improves the surface representation, reducing the Chamfer distance by 70%, the Hausdorff distance by 32%, and the Earth Mover's distance by 6%. By combining geometric deep learning, denoising diffusion models, and advanced surface reconstruction techniques, the proposed method offers a promising solution for accurately modeling the surface of the large intestine and extends readily to other anatomical structures.
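The pipeline begins by sampling points on the surface of the 3D segmentation mask, and refinement quality is later scored with point-set metrics such as the Chamfer distance. The abstract does not spell out the implementation, so the sketch below is only an illustration under simplifying assumptions: `surface_points` and `chamfer_distance` are hypothetical helper names, surface voxels are taken as foreground voxels with at least one background 6-neighbour (rather than a mesh-based sampler), and the Chamfer distance is computed by brute force.

```python
import numpy as np

def surface_points(mask):
    """Coordinates of surface voxels of a binary 3D mask: foreground
    voxels that have at least one background 6-neighbour."""
    m = mask.astype(bool)
    interior = m.copy()
    # A voxel is interior iff it and all six axis-aligned neighbours
    # are foreground; everything else in the mask is surface.
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)
    return np.argwhere(m & ~interior).astype(float)

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    mean nearest-neighbour distance in both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The same pairwise-distance matrix `d` also yields a brute-force Hausdorff distance (`max` of directional `min` distances instead of the mean), which is practical for the few thousand surface points typical of a single organ.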
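At the core of the refinement are two conditional denoising diffusion models operating in a hierarchical latent space. The abstract gives no implementation details, so the following is a generic sketch of ancestral DDPM sampling conditioned on an encoding of the incomplete shape, not the paper's actual code; `ddpm_refine`, `eps_model`, and the beta schedule passed in are illustrative assumptions.

```python
import numpy as np

def ddpm_refine(z_T, eps_model, cond, betas, rng):
    """Ancestral DDPM sampling in a latent space.

    z_T       -- initial Gaussian latent, shape (d,)
    eps_model -- callable (z_t, t, cond) -> predicted noise, shape (d,)
    cond      -- conditioning vector (e.g. encoding of the coarse shape)
    betas     -- noise schedule, array of length T
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    z = z_T
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(z, t, cond)
        # Posterior mean of z_{t-1} given z_t and the predicted noise.
        mean = (z - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            z = mean + np.sqrt(betas[t]) * rng.standard_normal(z.shape)
        else:
            z = mean  # final step is deterministic
    return z
```

In a hierarchical setup like the one described, this loop would run twice: once for the global shape latent, and once for the local latents conditioned on the sampled global latent.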