
Detecting Bone Lesions in X-Ray Under Diverse Acquisition Conditions (2212.07792v2)

Published 15 Dec 2022 in eess.IV

Abstract: The diagnosis of primary bone tumors is challenging, as the initial complaints are often non-specific. Early detection of bone cancer is crucial for a favorable prognosis. Incidentally, lesions may be found on radiographs obtained for other reasons. However, these early indications are often missed. In this work, we propose an automatic algorithm to detect bone lesions in conventional radiographs to facilitate early diagnosis. Detecting lesions in such radiographs is challenging: first, the prevalence of bone cancer is very low, so any method must show high precision to avoid a prohibitive number of false alarms. Second, radiographs taken in health maintenance organizations (HMOs) or emergency departments (EDs) suffer from inherent diversity due to different X-ray machines, technicians, and imaging protocols. This diversity poses a major challenge to any automatic analysis method. We propose to train an off-the-shelf object detection algorithm to detect lesions in radiographs. The novelty of our approach stems from a dedicated preprocessing stage that directly addresses the diversity of the data. The preprocessing consists of self-supervised region-of-interest detection using a vision transformer (ViT), followed by foreground-based histogram equalization that enhances contrast in the relevant regions only. We evaluate our method via a retrospective study that analyzes bone tumors on radiographs acquired from January 2003 to December 2018 under diverse acquisition protocols. Our method obtains 82.43% sensitivity at a 1.5% false-positive rate and surpasses existing preprocessing methods. For lesion detection, our method achieves 82.5% accuracy and an IoU of 0.69. The proposed preprocessing enables the detector to cope effectively with the inherent diversity of radiographs acquired in HMOs and EDs.
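The foreground-based histogram equalization described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the ROI stage (e.g. the self-supervised ViT) has already produced a boolean foreground mask, and it builds the equalization mapping from foreground pixels only. The function name and the mask argument are hypothetical, introduced here only for illustration.

```python
import numpy as np

def foreground_histogram_equalization(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Equalize contrast using the histogram of foreground pixels only.

    image: 2-D uint8 radiograph.
    mask:  boolean array of the same shape; True where the ROI stage
           (e.g. a self-supervised ViT saliency map) marked foreground.
           Both the signature and the mask source are illustrative assumptions.
    """
    fg = image[mask]                               # intensities inside the ROI
    hist = np.bincount(fg, minlength=256)          # 256-bin histogram of foreground pixels
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                 # normalized cumulative distribution
    lut = np.round(cdf * 255.0).astype(np.uint8)   # intensity remapping table
    out = image.copy()
    out[mask] = lut[image[mask]]                   # remap foreground; background left as-is
    return out
```

Whether the background is remapped with the same foreground-derived lookup table or left unchanged is a design choice the abstract does not pin down; this sketch leaves the background untouched so that only the relevant regions are enhanced.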
