
Anatomical Structure-Guided Medical Vision-Language Pre-training (2403.09294v1)

Published 14 Mar 2024 in cs.CV and cs.CL

Abstract: Learning medical visual representations through vision-language pre-training has achieved remarkable progress. Despite the promising performance, it still faces two challenges: local alignment lacks interpretability and clinical relevance, and internal and external representation learning of image-report pairs is insufficient. To address these issues, we propose an Anatomical Structure-Guided (ASG) framework. Specifically, we parse raw reports into triplets <anatomical region, finding, existence> and fully utilize each element as supervision to enhance representation learning. For anatomical regions, we design an automatic anatomical region-sentence alignment paradigm in collaboration with radiologists, treating regions as the minimum semantic units for exploring fine-grained local alignment. For findings and existence, we regard them as image tags, applying an image-tag recognition decoder to associate image features with their respective tags within each sample, and constructing soft labels for contrastive learning to improve the semantic association of different image-report pairs. We evaluate the proposed ASG framework on two downstream tasks across five public benchmarks. Experimental results demonstrate that our method outperforms state-of-the-art methods.
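The soft-label contrastive component mentioned in the abstract can be illustrated with a minimal sketch. Assuming a PyTorch setup with L2-normalized image/text embeddings and a multi-hot encoding of each sample's <finding, existence> tags, pairwise targets become soft distributions derived from tag overlap rather than one-hot identities. All function names, the tag-overlap similarity, and the identity-matrix stabilization below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_label_contrastive_loss(img_emb, txt_emb, tag_multihot, temperature=0.07):
    """Hypothetical soft-label contrastive objective.

    img_emb, txt_emb: (B, D) L2-normalized image/text embeddings.
    tag_multihot:     (B, T) multi-hot <finding, existence> tags per sample.
    """
    # Pairwise image-text similarities scaled by temperature.
    logits = img_emb @ txt_emb.t() / temperature  # (B, B)

    # Soft targets from tag overlap: pairs sharing findings are treated
    # as partial positives instead of hard negatives.
    tag_sim = F.normalize(tag_multihot.float(), dim=1)
    targets = tag_sim @ tag_sim.t()

    # Guarantee each sample matches itself (a common stabilization choice,
    # assumed here), then row-normalize into proper distributions.
    targets = targets + torch.eye(logits.size(0), device=logits.device)
    targets = targets / targets.sum(dim=1, keepdim=True)

    # Symmetric soft-label cross-entropy (PyTorch >= 1.10 accepts
    # probabilistic targets in F.cross_entropy).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```

Row-normalizing the tag-overlap matrix keeps each row a valid target distribution, so image-report pairs with overlapping findings contribute graded positive signal rather than being penalized as pure negatives.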

Authors (9)
  1. Qingqiu Li (11 papers)
  2. Xiaohan Yan (14 papers)
  3. Jilan Xu (32 papers)
  4. Runtian Yuan (6 papers)
  5. Yuejie Zhang (31 papers)
  6. Rui Feng (67 papers)
  7. Quanli Shen (10 papers)
  8. Xiaobo Zhang (19 papers)
  9. Shujun Wang (46 papers)
Citations (4)
