BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs (2303.00915v2)

Published 2 Mar 2023 in cs.CV and cs.CL

Abstract: Biomedical data is inherently multimodal, comprising physical measurements and natural language narratives. A generalist biomedical AI model needs to simultaneously process different modalities of data, including text and images. Therefore, training an effective generalist biomedical model requires high-quality multimodal data, such as parallel image-text pairs. Here, we present PMC-15M, a novel dataset that is two orders of magnitude larger than existing biomedical multimodal datasets such as MIMIC-CXR, and spans a diverse range of biomedical image types. PMC-15M contains 15 million biomedical image-text pairs collected from 4.4 million scientific articles. Based on PMC-15M, we have pretrained BiomedCLIP, a multimodal foundation model, with domain-specific adaptations tailored to biomedical vision-language processing. We conducted extensive experiments and ablation studies on standard biomedical imaging tasks from retrieval to classification to visual question-answering (VQA). BiomedCLIP achieved new state-of-the-art results in a wide range of standard datasets, substantially outperforming prior approaches. Intriguingly, by large-scale pretraining on diverse biomedical image types, BiomedCLIP even outperforms state-of-the-art radiology-specific models such as BioViL in radiology-specific tasks such as RSNA pneumonia detection. In summary, BiomedCLIP is a fully open-access foundation model that achieves state-of-the-art performance on various biomedical tasks, paving the way for transformative multimodal biomedical discovery and applications. We release our models at https://aka.ms/biomedclip to facilitate future research in multimodal biomedical AI.

Authors (21)
  1. Sheng Zhang (212 papers)
  2. Yanbo Xu (14 papers)
  3. Naoto Usuyama (22 papers)
  4. Jaspreet Bagga (1 paper)
  5. Robert Tinn (6 papers)
  6. Sam Preston (5 papers)
  7. Rajesh Rao (5 papers)
  8. Mu Wei (11 papers)
  9. Naveen Valluri (3 papers)
  10. Cliff Wong (14 papers)
  11. Matthew P. Lungren (43 papers)
  12. Tristan Naumann (41 papers)
  13. Hoifung Poon (61 papers)
  14. Hanwen Xu (16 papers)
  15. Andrea Tupini (3 papers)
  16. Yu Wang (939 papers)
  17. Matt Mazzola (3 papers)
  18. Swadheen Shukla (4 papers)
  19. Lars Liden (12 papers)
  20. Jianfeng Gao (344 papers)
Citations (138)