MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision (2308.16139v5)

Published 30 Aug 2023 in cs.CV, cs.DB, and cs.LG

Abstract: Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in brain tumor classification, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
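The abstract contrasts shape representations such as voxel grids and point clouds. As a minimal, library-free sketch (not the MedShapeNet API), the following converts a voxel occupancy set into the point cloud of occupied-voxel centers, the kind of representation change these benchmarks rely on:

```python
# Generic illustration of two shape representations named above:
# a voxel occupancy grid and the point cloud of occupied-voxel centers.
# This is a hypothetical sketch, not code from the MedShapeNet toolkit.

def voxels_to_point_cloud(occupied, voxel_size=1.0):
    """Convert a set of integer voxel indices (i, j, k) into the
    sorted (x, y, z) coordinates of each voxel center."""
    half = voxel_size / 2.0
    return sorted((i * voxel_size + half,
                   j * voxel_size + half,
                   k * voxel_size + half) for (i, j, k) in occupied)

# A tiny 2x2x2 occupied block standing in for a real anatomical shape.
shape = {(i, j, k) for i in range(2) for j in range(2) for k in range(2)}
cloud = voxels_to_point_cloud(shape)
print(len(cloud))  # one point per occupied voxel
```

Real shapes in the dataset are distributed as surface meshes (e.g., STL); in practice a mesh library would perform the voxelization and sampling steps this sketch only gestures at.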

  128. N. Ravi, J. Reizenstein, D. Novotny, T. Gordon, W.-Y. Lo, J. Johnson, and G. Gkioxari, “Accelerating 3d deep learning with pytorch3d,” arXiv preprint arXiv:2007.08501, 2020.
  129. N. Khalid, A. Qayyum, M. Bilal, A. Al-Fuqaha, and J. Qadir, “Privacy-preserving artificial intelligence in healthcare: Techniques and applications,” Computers in Biology and Medicine, p. 106848, 2023.
  130. C. G. Schwarz, W. K. Kremers, T. M. Therneau, R. R. Sharp, J. L. Gunter, P. Vemuri, A. Arani, A. J. Spychalla, K. Kantarci, D. S. Knopman et al., “Identification of anonymous mri research participants with face-recognition software,” New England Journal of Medicine, vol. 381, no. 17, pp. 1684–1686, 2019.
  131. F. Gießler, M. Thormann, B. Preim, D. Behme, and S. Saalfeld, “Facial feature removal for anonymization of neurological image data,” in Current Directions in Biomedical Engineering, vol. 7, no. 1.   De Gruyter, 2021, pp. 130–134.
  132. J. McLaughlin, S. Fang, J. Huang, L. Robinson, S. Jacobson, T. Foroud, and H. E. Hoyme, “Interactive feature visualization and detection for 3d face classification,” in 9th IEEE International Conference on Cognitive Informatics (ICCI’10).   IEEE, 2010, pp. 160–167.
  133. K. Suzuki, H. Nakano, K. Inoue, Y. Nakajima, S. Mizobuchi, M. Omori, N. Kato-Kogoe, K. Mishima, and T. Ueno, “Examination of new parameters for sex determination of mandible using japanese computer tomography data,” Dentomaxillofacial Radiology, vol. 49, no. 5, p. 20190282, 2020.
Citations (22)

Summary

  • The paper presents MedShapeNet, a comprehensive dataset with over 100,000 high-quality 3D anatomical shapes derived from real patient imaging data.
  • It enhances research in medical imaging through an interactive web interface and Python API, facilitating applications like brain tumor classification and skull reconstruction.
  • The dataset provides benchmarks for discriminative, reconstructive, and variational tasks, promoting advancements in diagnostic accuracy and surgical planning.

Overview of "MedShapeNet - A Large-Scale Dataset of 3D Medical Shapes for Computer Vision"

The paper introduces MedShapeNet, a large-scale dataset designed to bolster research in computer vision and facilitate advancements in medical imaging applications. The dataset addresses the need for high-quality 3D shape data in the medical domain, recognizing that state-of-the-art algorithms in computer vision, largely developed on non-medical datasets such as ShapeNet and Princeton ModelNet, are not readily applicable to medical imaging challenges because the data characteristics and requirements differ.

Core Contributions

MedShapeNet provides an extensive collection of anatomical shapes, including bones, organs, and vessels, paired with ground-truth annotations. The dataset comprises more than 100,000 shapes modeled directly on the imaging data of real patients, covers diverse anatomical representations, and is accessible through an interactive web interface and a Python API. Key contributions of the dataset are:

  1. Diversification and Scalability: MedShapeNet aggregates a wide array of anatomical shapes from various medical sources, spanning 23 datasets acquired with multiple imaging modalities such as CT, MRI, and PET. This breadth enables diverse applications in virtual reality (VR), augmented reality (AR), mixed reality (MR), and medical education.
  2. Practical Applications: The dataset offers practical use cases, such as brain tumor classification, skull reconstruction, and anatomy completion. These applications demonstrate the potential of MedShapeNet in improving diagnostic processes and surgical planning through shape-based analysis and prediction models.
  3. Reconstructive and Variational Benchmarks: MedShapeNet provides benchmarks for reconstructive tasks (e.g., facial reconstruction) and variational tasks (e.g., modeling anatomical shape changes due to diseases or aging). By utilizing these benchmarks, researchers can train models to effectively reconstruct or predict anatomical shapes under various conditions.
  4. Innovation in Data Sharing: A user-friendly web interface and Python API ensure that researchers can readily access and utilize the dataset within standard computer vision workflows. Additionally, the continuous extension and open-source nature of MedShapeNet promote collaborative development and integration into existing and future research frameworks.
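The exact signatures of the Python API are not specified here, so the following is only a minimal, stdlib-only sketch under the assumption that the shapes are distributed as binary STL meshes (a common 3D interchange format for this kind of data). The `make_demo_stl` helper is purely illustrative; it builds a one-triangle file in memory so the parser can be exercised without downloading anything:

```python
import struct

def parse_binary_stl(data: bytes):
    """Parse a binary STL buffer into a list of triangles (each 3 vertices)."""
    if len(data) < 84:
        raise ValueError("buffer too small for binary STL")
    # Binary STL: 80-byte header, then a little-endian uint32 triangle count.
    (n_tri,) = struct.unpack_from("<I", data, 80)
    triangles = []
    offset = 84
    for _ in range(n_tri):
        # Each record: 12 little-endian floats (normal + 3 vertices), then
        # a 2-byte attribute count, 50 bytes in total.
        values = struct.unpack_from("<12f", data, offset)
        triangles.append((values[3:6], values[6:9], values[9:12]))
        offset += 50
    return triangles

def make_demo_stl() -> bytes:
    """Build a one-triangle binary STL buffer for demonstration."""
    header = b"\0" * 80
    count = struct.pack("<I", 1)
    tri = struct.pack("<12fH",
                      0.0, 0.0, 1.0,   # normal
                      0.0, 0.0, 0.0,   # vertex 0
                      1.0, 0.0, 0.0,   # vertex 1
                      0.0, 1.0, 0.0,   # vertex 2
                      0)               # attribute byte count
    return header + count + tri
```

Once parsed this way, the vertex lists can be fed into standard mesh or point-cloud tooling in a typical computer vision workflow.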

Implications for Medical and Computer Vision Communities

MedShapeNet serves as a bridge between medical imaging and computer vision disciplines, providing critical data that can lead to novel solutions in medical diagnostics and treatment planning. The dataset underscores the importance of shape-based methodologies in capturing morphological changes related to various medical conditions, which are often not represented by voxel-based approaches alone.
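To make the voxel-versus-shape contrast concrete, here is a minimal, illustrative sketch (not MedShapeNet's own pipeline) that discretizes a surface point cloud into an occupancy grid, the representation most voxel-based models consume; surface detail finer than one voxel is lost in this conversion, which is exactly what shape-based methods avoid:

```python
def voxelize(points, resolution=32):
    """Map 3D points in the unit cube [0, 1)^3 to occupied voxel indices."""
    occupied = set()
    for x, y, z in points:
        # Clamp to the last voxel so points exactly on the upper bound fit.
        i = min(int(x * resolution), resolution - 1)
        j = min(int(y * resolution), resolution - 1)
        k = min(int(z * resolution), resolution - 1)
        occupied.add((i, j, k))
    return occupied

# Three surface samples fall into three distinct voxels at resolution 4.
surface = [(0.1, 0.1, 0.1), (0.5, 0.5, 0.5), (0.9, 0.9, 0.9)]
grid = voxelize(surface, resolution=4)
```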

Privacy Considerations: By focusing on shape data, MedShapeNet mitigates privacy concerns associated with sharing medical imaging data. However, researchers are urged to handle the data ethically and adhere to regulatory requirements regarding patient anonymity and data sensitivity.

Future Directions: The dataset is poised to stimulate research in deep learning-based segmentation, anomaly detection, and pattern recognition within medical imaging contexts. Future expansions may include more datasets, especially targeting less common pathological conditions, thereby enhancing its utility further.

Speculative Developments: As AI continues to evolve, MedShapeNet could play a pivotal role in advancing personalized medicine, where machine learning models predict disease progression and inform patient-specific treatment strategies based on anatomical changes over time.
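One concrete, if simplified, example of quantifying anatomical change over time is tracking a scalar shape descriptor such as mesh volume. For a closed, outward-oriented triangle mesh, the divergence theorem gives the enclosed volume as a sum of signed tetrahedron volumes. The unit tetrahedron below is only a toy stand-in for a patient-specific organ mesh:

```python
def mesh_volume(triangles):
    """Signed volume of a closed, outward-oriented triangle mesh,
    computed as V = sum(dot(v0, cross(v1, v2))) / 6 over all faces."""
    total = 0.0
    for v0, v1, v2 in triangles:
        # Cross product v1 x v2, expanded component-wise.
        cx = v1[1] * v2[2] - v1[2] * v2[1]
        cy = v1[2] * v2[0] - v1[0] * v2[2]
        cz = v1[0] * v2[1] - v1[1] * v2[0]
        total += v0[0] * cx + v0[1] * cy + v0[2] * cz
    return total / 6.0

# Toy mesh: a unit right tetrahedron (volume 1/6), faces wound outward.
A, B, C, D = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(A, C, B), (A, B, D), (A, D, C), (B, C, D)]
```

Comparing such descriptors across longitudinal scans of the same patient is one simple way shape data could inform progression models.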

Conclusion

MedShapeNet represents a significant contribution to the medical imaging and computer vision fields, providing a foundational dataset that supports a wide range of applications from clinical diagnostics to educational tools. Its ongoing development promises to keep pace with the shifting landscape of AI in healthcare, fostering innovation and collaboration across disciplines.