Detecting Bone Lesions in X-Ray Under Diverse Acquisition Conditions (2212.07792v2)
Abstract: The diagnosis of primary bone tumors is challenging, as the initial complaints are often non-specific. Early detection of bone cancer is crucial for a favorable prognosis. Lesions may be found incidentally on radiographs obtained for other reasons, yet these early indications are often missed. In this work, we propose an automatic algorithm to detect bone lesions in conventional radiographs to facilitate early diagnosis. Detecting lesions in such radiographs is challenging: first, the prevalence of bone cancer is very low, so any method must show high precision to avoid a prohibitive number of false alarms. Second, radiographs taken in health maintenance organizations (HMOs) or emergency departments (EDs) suffer from inherent diversity due to different X-ray machines, technicians, and imaging protocols. This diversity poses a major challenge to any automatic analysis method. We propose to train an off-the-shelf object detection algorithm to detect lesions in radiographs. The novelty of our approach stems from a dedicated preprocessing stage that directly addresses the diversity of the data. The preprocessing consists of self-supervised region-of-interest detection using a vision transformer (ViT), and a foreground-based histogram equalization that enhances contrast in relevant regions only. We evaluate our method via a retrospective study that analyzes bone tumors on radiographs acquired from January 2003 to December 2018 under diverse acquisition protocols. Our method obtains 82.43% sensitivity at a 1.5% false-positive rate and surpasses existing preprocessing methods. For lesion detection, our method achieves 82.5% accuracy and an IoU of 0.69. The proposed preprocessing method enables effective handling of the inherent diversity of radiographs acquired in HMOs and EDs.
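The foreground-based histogram equalization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intensity mapping is built from the histogram of foreground pixels only and applied only inside the region of interest. In the paper the foreground mask comes from self-supervised ViT features; here the mask is simply supplied by the caller, and the function name and toy image are assumptions for the example.

```python
import numpy as np

def foreground_histogram_equalization(img, mask, n_bins=256):
    """Equalize contrast using the histogram of foreground pixels only.

    img  : uint8 grayscale radiograph
    mask : boolean array marking the region of interest (in the paper,
           derived from self-supervised ViT features; supplied here)
    """
    fg = img[mask]
    # Histogram and cumulative distribution over foreground pixels only.
    hist, _ = np.histogram(fg, bins=n_bins, range=(0, n_bins))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # normalize to [0, 1]
    lut = np.round(cdf * (n_bins - 1)).astype(np.uint8)      # intensity mapping
    out = img.copy()
    out[mask] = lut[img[mask]]  # remap foreground; background left untouched
    return out

# Toy example: a low-contrast "bone" region on a bright background.
img = np.full((64, 64), 200, dtype=np.uint8)
img[16:48, 16:48] = np.linspace(40, 90, 32, dtype=np.uint8)
mask = np.zeros_like(img, dtype=bool)
mask[16:48, 16:48] = True
eq = foreground_histogram_equalization(img, mask)
```

Because the histogram ignores the background, the mapping stretches the narrow foreground intensity range to the full dynamic range instead of being dominated by bright background pixels, which is the point of restricting equalization to the detected region.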