
PE-MVCNet: Multi-view and Cross-modal Fusion Network for Pulmonary Embolism Prediction (2402.17187v3)

Published 27 Feb 2024 in eess.IV and cs.CV

Abstract: The early detection of a pulmonary embolism (PE) is critical for enhancing patient survival rates. Both image-based and non-image-based features are of utmost importance in medical classification tasks. In a clinical setting, physicians tend to rely on the contextual information provided by Electronic Medical Records (EMR) to interpret medical imaging. However, very few models effectively integrate clinical information with imaging data. To address this shortcoming, we propose a multimodal fusion methodology, termed PE-MVCNet, which capitalizes on Computed Tomography Pulmonary Angiography (CTPA) imaging and EMR data. This method comprises an Image-only module with an integrated multi-view block, an EMR-only module, and a Cross-modal Attention Fusion (CMAF) module. These modules cooperate to extract comprehensive features that subsequently generate predictions for PE. We conducted experiments on the publicly accessible Stanford University Medical Center dataset, achieving an AUROC of 94.1%, an accuracy of 90.2%, and an F1 score of 90.6%. Our proposed model outperforms existing methodologies, confirming that multimodal fusion excels compared to models that use a single data modality. Our source code is available at https://github.com/LeavingStarW/PE-MVCNET.
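The abstract describes a CMAF module in which image features and EMR features are fused via cross-modal attention. The paper's exact architecture is in the linked repository; as a rough illustration of the general idea, the following is a minimal single-head sketch in NumPy where image features act as queries over EMR keys/values and the attended context is concatenated back onto the image features. All shapes, the scaling, and the concatenation step here are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_feats, emr_feats):
    """Illustrative cross-modal attention fusion.

    img_feats: (N_img, d) image-branch feature tokens (queries)
    emr_feats: (N_emr, d) EMR-branch feature tokens (keys and values)
    Returns (N_img, 2*d): image features concatenated with the
    EMR context each image token attended to.
    """
    d = emr_feats.shape[-1]
    # Scaled dot-product scores between modalities: (N_img, N_emr)
    scores = img_feats @ emr_feats.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    # Attention-weighted EMR context per image token: (N_img, d)
    context = weights @ emr_feats
    # Fuse by concatenation (one common choice; the paper may differ)
    return np.concatenate([img_feats, context], axis=-1)
```

In a full model, learned query/key/value projections would precede the dot product and the fused features would feed a classification head; this sketch keeps only the attention-and-fuse core.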
