A multi-stage semi-supervised learning for ankle fracture classification on CT images

Published 29 Mar 2024 in eess.IV and cs.CV | (2403.19983v1)

Abstract: Because of the complex mechanism of ankle injury, diagnosing ankle fractures in clinical practice is difficult. To simplify the diagnostic process, an automatic ankle fracture diagnosis model is proposed. First, a tibia-fibula segmentation network is designed for the tibiofibular region of the ankle joint, and a corresponding segmentation dataset is built from fracture data. Second, an image registration method aligns the bone segmentation mask with a normal bone mask. Finally, a semi-supervised classifier is constructed to exploit a large amount of unlabeled data for ankle fracture classification. Experiments show that the proposed method accurately segments fractures with visible fracture lines and outperforms general segmentation methods; it also surpasses standard classification networks on several metrics.
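The abstract's third stage, a semi-supervised classifier that exploits unlabeled data, is commonly built on confidence-thresholded pseudo-labeling. The sketch below illustrates that general idea only; the threshold value, class count, and array shapes are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep only unlabeled samples whose top predicted class probability
    exceeds `threshold`; return their indices and hard pseudo-labels.
    (Illustrative sketch; threshold is an assumed hyperparameter.)"""
    conf = probs.max(axis=1)       # per-sample confidence
    labels = probs.argmax(axis=1)  # predicted class index
    keep = conf >= threshold
    return np.flatnonzero(keep), labels[keep]

# Toy softmax outputs for 4 unlabeled CT crops over 3 hypothetical classes.
probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> kept as pseudo-label
    [0.40, 0.35, 0.25],   # uncertain -> discarded
    [0.01, 0.98, 0.01],   # confident -> kept as pseudo-label
    [0.50, 0.30, 0.20],   # uncertain -> discarded
])
idx, labels = select_pseudo_labels(probs, threshold=0.95)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

Confident predictions are promoted to training labels while uncertain ones are held back, which is how such a classifier can make use of a large pool of unlabeled scans.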
