Multimodal self-supervised learning for lesion localization (2401.01524v3)
Abstract: Multimodal deep learning that combines imaging with diagnostic reports has made impressive progress in medical imaging diagnosis, proving especially useful for auxiliary diagnosis when detailed annotations are scarce. Nonetheless, accurately localizing diseases without positional annotations remains a challenge. Although existing methods attempt to exploit local information for fine-grained semantic alignment, they are limited in extracting the fine-grained semantics of the full context of a report. To address this problem, a new method is introduced that takes complete sentences from textual reports as the basic units of local semantic alignment. The approach pairs chest X-ray images with their corresponding reports and performs contrastive learning at both the global and local levels, as sketched below. Leading results on multiple datasets confirm the method's efficacy for lesion localization.
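To make the two-level objective concrete, here is a minimal PyTorch sketch, not the paper's implementation: it assumes a CNN image encoder that yields patch features and a text encoder that yields one embedding per report sentence. The helper names (`info_nce`, `local_alignment_loss`), the attention-pooling of patches per sentence, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE: matched (a_i, b_i) pairs are positives,
    all other pairings in the batch serve as negatives."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def local_alignment_loss(patch_feats, sent_feats, sent_mask, temperature=0.07):
    """Sentence-level local alignment: each report sentence attends over the
    image's patch features, and the attended visual context is contrasted
    against that sentence's embedding across all sentences in the batch."""
    # patch_feats: (B, P, D) patch embeddings from the image encoder
    # sent_feats:  (B, S, D) sentence embeddings from the text encoder
    # sent_mask:   (B, S) bool, True for real (non-padding) sentences
    sim = F.normalize(sent_feats, dim=-1) @ F.normalize(patch_feats, dim=-1).transpose(1, 2)
    attn = torch.softmax(sim / temperature, dim=-1)   # (B, S, P) sentence-to-patch attention
    context = attn @ patch_feats                      # (B, S, D) pooled visual context per sentence
    valid = sent_mask.reshape(-1)
    d = sent_feats.size(-1)
    return info_nce(sent_feats.reshape(-1, d)[valid],
                    context.reshape(-1, d)[valid], temperature)

# Usage on dummy features: global image-report contrast plus local sentence-level contrast.
B, P, S, D = 8, 49, 6, 512
img_global, txt_global = torch.randn(B, D), torch.randn(B, D)
patches, sents = torch.randn(B, P, D), torch.randn(B, S, D)
mask = torch.rand(B, S) > 0.2
loss = info_nce(img_global, txt_global) + local_alignment_loss(patches, sents, mask)
```

The design point the sketch illustrates is that whole sentences, rather than individual words, serve as the text-side units of local alignment, so each sentence's clinical meaning is matched against a pooled image region.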
Authors: Hao Yang, Hong-Yu Zhou, Cheng Li, Weijian Huang, Jiarun Liu, Yong Liang, Shanshan Wang, Guangming Shi, Hairong Zheng, Qiegen Liu