Multimodal self-supervised learning for lesion localization (2401.01524v3)

Published 3 Jan 2024 in cs.CV

Abstract: Multimodal deep learning utilizing imaging and diagnostic reports has made impressive progress in the field of medical imaging diagnostics, demonstrating a particularly strong capability for auxiliary diagnosis in cases where sufficient annotation information is lacking. Nonetheless, localizing diseases accurately without detailed positional annotations remains a challenge. Although existing methods have attempted to utilize local information to achieve fine-grained semantic alignment, their capability in extracting the fine-grained semantics of the comprehensive context within reports is limited. To address this problem, a new method is introduced that takes full sentences from textual reports as the basic units for local semantic alignment. This approach combines chest X-ray images with their corresponding textual reports, performing contrastive learning at both global and local levels. The leading results obtained by this method on multiple datasets confirm its efficacy in the task of lesion localization.
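
The abstract describes contrastive learning at two granularities: global (whole image vs. whole report) and local (full sentences of the report as the units of alignment). Below is a minimal PyTorch-style sketch of that idea. The encoder outputs, the attention-based matching of sentences to image regions, the symmetric InfoNCE formulation, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of global + sentence-level (local) image-text contrastive
# alignment for chest X-rays and reports. Shapes and pooling choices are
# assumptions made for illustration only.
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over matched rows of a and b, each of shape [N, D]."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                        # [N, N] similarity matrix
    targets = torch.arange(a.size(0), device=a.device)      # row i matches column i
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def global_local_contrastive_loss(
    img_global: torch.Tensor,     # [B, D]    one embedding per chest X-ray
    img_regions: torch.Tensor,    # [B, R, D] region/patch embeddings
    txt_global: torch.Tensor,     # [B, D]    one embedding per report
    txt_sentences: torch.Tensor,  # [B, S, D] sentence embeddings (padded to S)
    sent_mask: torch.Tensor,      # [B, S]    1 for real sentences, 0 for padding
) -> torch.Tensor:
    # Global alignment: whole image vs. whole report.
    loss_global = info_nce(img_global, txt_global)

    # Local alignment: each sentence attends over image regions, and the
    # attended visual context is pulled toward that sentence's embedding.
    q = F.normalize(txt_sentences, dim=-1)                  # [B, S, D]
    k = F.normalize(img_regions, dim=-1)                    # [B, R, D]
    attn = torch.softmax(q @ k.transpose(1, 2), dim=-1)     # [B, S, R]
    visual_ctx = attn @ img_regions                         # [B, S, D]

    mask = sent_mask.bool().reshape(-1)
    loss_local = info_nce(
        visual_ctx.reshape(-1, visual_ctx.size(-1))[mask],
        txt_sentences.reshape(-1, txt_sentences.size(-1))[mask],
    )
    return loss_global + loss_local
```

Treating sentences (rather than individual words) as the local units keeps each text query semantically self-contained, which is the motivation the abstract gives for sentence-level alignment; the sentence-to-region attention map is also what would make weakly supervised lesion localization possible at inference time.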

Authors (10)
  1. Hao Yang (328 papers)
  2. Hong-Yu Zhou (50 papers)
  3. Cheng Li (1094 papers)
  4. Weijian Huang (19 papers)
  5. Jiarun Liu (17 papers)
  6. Yong Liang (32 papers)
  7. Shanshan Wang (166 papers)
  8. Guangming Shi (87 papers)
  9. Hairong Zheng (71 papers)
  10. Qiegen Liu (67 papers)
Citations (3)

