
Med-Flamingo: a Multimodal Medical Few-shot Learner (2307.15189v1)

Published 27 Jul 2023 in cs.CV and cs.AI

Abstract: Medicine, by its nature, is a multifaceted domain that requires the synthesis of information across various modalities. Medical generative vision-language models (VLMs) make a first step in this direction and promise many exciting clinical applications. However, existing models typically have to be fine-tuned on sizeable down-stream datasets, which poses a significant limitation as in many medical applications data is scarce, necessitating models that are capable of learning from few examples in real-time. Here we propose Med-Flamingo, a multimodal few-shot learner adapted to the medical domain. Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks. Med-Flamingo unlocks few-shot generative medical visual question answering (VQA) abilities, which we evaluate on several datasets including a novel challenging open-ended VQA dataset of visual USMLE-style problems. Furthermore, we conduct the first human evaluation for generative medical VQA where physicians review the problems and blinded generations in an interactive app. Med-Flamingo improves performance in generative medical VQA by up to 20% in clinicians' ratings and is the first to enable multimodal medical few-shot adaptations, such as rationale generation. We release our model, code, and evaluation app at https://github.com/snap-stanford/med-flamingo.

Med-Flamingo: A Multimodal Medical Few-shot Learner

The paper presents "Med-Flamingo," a vision-language model (VLM) designed explicitly for medical applications. The work addresses a key limitation of existing medical models, which typically require large downstream datasets for fine-tuning, a particular challenge in the medical domain, where data is frequently scarce.

Med-Flamingo builds on OpenFlamingo-9B, continuing pre-training on a curated dataset of paired and interleaved medical image-text data from reputable sources such as publications and textbooks. Interleaving images and text during pre-training is what lets the model condition on several worked examples in its prompt at inference time (see the prompt sketch below), expanding its capacity for multimodal few-shot learning and broadening its potential applications in clinical settings.
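To make the few-shot interface concrete, here is a minimal sketch of how a Flamingo-style interleaved prompt could be assembled for medical VQA. The `<image>` and `<|endofchunk|>` markers follow the OpenFlamingo convention that Med-Flamingo inherits; the system preamble and the example questions and answers are illustrative placeholders, not taken from the paper.

```python
# Sketch of a Flamingo-style interleaved few-shot prompt for medical VQA.
# <image> marks where an image is consumed; <|endofchunk|> separates examples.
# The questions and answers below are illustrative placeholders.
few_shot_examples = [
    ("What abnormality is visible in this chest X-ray?",
     "A right-sided pleural effusion."),
    ("Which organ is shown in this CT slice?",
     "The liver."),
]

prompt = "You are a helpful medical assistant.\n"
for question, answer in few_shot_examples:
    prompt += f"<image>Question: {question} Answer: {answer}<|endofchunk|>"

# The final, unanswered query; the model is expected to complete the answer.
prompt += "<image>Question: What does this MRI show? Answer:"
```

Each few-shot example contributes one image and one completed question-answer pair, and the model learns the task format from these in-context examples without any gradient updates.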

Key Contributions

  1. Multimodal Few-shot Learning: Med-Flamingo is the first model to bring multimodal few-shot capabilities to medical contexts, enabling nuanced tasks such as visual question answering (VQA) and rationale generation (a hedged inference sketch follows this list).
  2. Curated Medical Training Dataset: Leveraging over 4,000 medical textbooks, the researchers created a comprehensive multimodal dataset. Curating from vetted publications and textbooks improves reliability and accuracy relative to data scraped from potentially unreliable web content.
  3. Evaluation on Diverse Datasets: The model's capabilities are evaluated across multiple datasets, including a newly developed Visual USMLE dataset. This dataset is significant for its inclusion of complex, multidisciplinary problems augmented with visual and contextually rich information.
  4. Human Evaluation Protocol: The paper includes a comprehensive human evaluation of generative VQA outputs by clinical experts, providing a more realistic assessment of model performance compared to automated metrics.
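For readers who want to try the released model, the following is a minimal inference sketch assuming the OpenFlamingo interface that Med-Flamingo builds on. The backbone paths, the `cross_attn_every_n_layers` value, the checkpoint filename, and the image filenames are assumptions to verify against the Med-Flamingo repository, not details confirmed by the paper.

```python
import torch
from PIL import Image
from open_flamingo import create_model_and_transforms

# Assumed configuration for an OpenFlamingo-9B-style model with a LLaMA-7B
# language backbone; check the Med-Flamingo repo for the exact values.
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="huggyllama/llama-7b",  # assumed backbone path
    tokenizer_path="huggyllama/llama-7b",
    cross_attn_every_n_layers=4,              # assumed; verify against the repo
)
# Load the Med-Flamingo weights on top (hypothetical filename):
# model.load_state_dict(torch.load("med_flamingo_checkpoint.pt"), strict=False)

# Three images: one per <image> token in the prompt (2 examples + 1 query).
images = [Image.open(p) for p in ["example1.png", "example2.png", "query.png"]]
# vision_x has shape (batch, num_images, frames, channels, height, width).
vision_x = torch.stack(
    [image_processor(im) for im in images]
).unsqueeze(0).unsqueeze(2)

# `prompt` is the interleaved few-shot string from the sketch above.
lang_x = tokenizer([prompt], return_tensors="pt")
generated = model.generate(
    vision_x=vision_x,
    lang_x=lang_x["input_ids"],
    attention_mask=lang_x["attention_mask"],
    max_new_tokens=50,
)
print(tokenizer.decode(generated[0]))
```

Note that the number of images stacked into `vision_x` must match the number of `<image>` tokens in the prompt; adding more few-shot examples means appending both another image and another completed question-answer chunk.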

Results

  • Med-Flamingo demonstrates up to a 20% improvement in clinical evaluation scores over existing models across datasets such as VQA-RAD and PathVQA.
  • It shows strong potential for generating open-ended answers and explanations, a capability not prevalent in prior medical VLMs.
  • Clinicians ranked Med-Flamingo's generations as the most preferred among the compared models for producing accurate and clinically useful medical VQA answers.

Discussion

The implications of Med-Flamingo's success are multifaceted. Primarily, it points to a shift towards more adaptive and versatile AI tools in medical settings. By reducing reliance on extensive labeled data and enabling better few-shot learning, Med-Flamingo lays the groundwork for future generalist medical models. These could advance AI applications in healthcare by providing more nuanced, context-aware responses and by enhancing human-AI collaboration through detailed rationales.

However, existing limitations, such as potential hallucinations and the requirement for large-scale training, highlight areas for further research. Future studies could expand the model's capacity by integrating more varied clinical data or emphasizing advanced alignment techniques, such as preference tuning. This progression could lead to models that are not only accurate but also effectively grounded in medical knowledge, alleviating operational risks in real-world applications.

In summary, Med-Flamingo represents a significant advancement in the creation of medical AI systems, aligning with the ongoing trajectory toward developing sophisticated, adaptable, and reliable multimodal medical models. The release of the model and resources on GitHub further encourages continued exploration and development in this critical domain.

Authors (9)
  1. Michael Moor (20 papers)
  2. Qian Huang (55 papers)
  3. Shirley Wu (12 papers)
  4. Michihiro Yasunaga (48 papers)
  5. Cyril Zakka (14 papers)
  6. Yash Dalmia (1 paper)
  7. Eduardo Pontes Reis (8 papers)
  8. Pranav Rajpurkar (69 papers)
  9. Jure Leskovec (233 papers)
Citations (160)