Automated Classification of Body MRI Sequence Type Using Convolutional Neural Networks (2402.08098v1)

Published 12 Feb 2024 in eess.IV and cs.CV

Abstract: Multi-parametric MRI of the body is routinely acquired for the identification of abnormalities and the diagnosis of disease. However, no standard naming convention exists for MRI protocols and their associated sequences, owing to wide variations in imaging practice across institutions and the myriad MRI scanners from different manufacturers in use. As a result, the intensity distributions of MRI sequences differ widely, and the sequence-type information in DICOM headers is often conflicting. At present, clinician oversight is necessary to ensure that the correct sequence is being read and used for diagnosis. This poses a challenge when specific series must be selected to build a cohort for a large clinical study or to develop AI algorithms. To reduce clinician oversight and ensure the validity of the DICOM headers, we propose an automated method to classify 3D MRI sequences acquired at the levels of the chest, abdomen, and pelvis. In our pilot work, a 3D DenseNet-121 model achieved an F1 score of 99.5% at differentiating five common MRI sequences obtained on three Siemens scanners (Aera, Verio, Biograph mMR). To the best of our knowledge, ours is the first automated method for the 3D classification of MRI sequences in the chest, abdomen, and pelvis, and it outperforms previous state-of-the-art MRI series classifiers.
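The paper itself does not include code. As an illustrative sketch only, a volumetric sequence classifier of the kind described takes a 3D MRI volume as input and emits logits over the five sequence classes. The toy network below is an assumption for demonstration (the class name, layer widths, and input size are invented); the authors used a full 3D DenseNet-121, which this minimal PyTorch model only approximates in interface, not in capacity.

```python
import torch
import torch.nn as nn

class Sequence3DClassifier(nn.Module):
    """Minimal 3D CNN sketch: volume in, sequence-class logits out."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            # Single-channel MRI volume -> 16 feature maps
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            # Collapse spatial dims so any input size yields a fixed vector
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)  # (batch, 32)
        return self.classifier(feats)        # (batch, num_classes)

# Toy batch: 2 single-channel volumes of 32 slices at 64x64
model = Sequence3DClassifier(num_classes=5)
volume = torch.randn(2, 1, 32, 64, 64)
logits = model(volume)
print(logits.shape)  # torch.Size([2, 5])
```

In practice the predicted class would be `logits.argmax(dim=1)`, trained with cross-entropy over labeled series; the DenseNet-121 variant simply swaps in a deeper, densely connected feature extractor with the same input/output contract.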
