MiniGPT-Med: Large Language Model as a General Interface for Radiology Diagnosis (2407.04106v1)
Abstract: Recent advances in AI have driven significant breakthroughs in healthcare, particularly in refining diagnostic procedures. However, previous studies have often been constrained to limited functionalities. This study introduces MiniGPT-Med, a vision-language model derived from large language models and tailored for medical applications. MiniGPT-Med demonstrates remarkable versatility across imaging modalities, including X-rays, CT scans, and MRIs. The model can perform tasks such as medical report generation, visual question answering (VQA), and disease identification within medical imagery, and its integrated processing of image and textual clinical data markedly improves diagnostic accuracy. Our empirical assessments confirm MiniGPT-Med's superior performance on disease grounding, medical report generation, and VQA benchmarks, representing a significant step toward closing the gap in AI-assisted radiology practice. It also achieves state-of-the-art performance on medical report generation, surpassing the previous best model by 19% in accuracy. MiniGPT-Med promises to serve as a general interface for radiology diagnosis, enhancing diagnostic efficiency across a wide range of medical imaging applications.
- Asma Alkhaldi
- Raneem Alnajim
- Layan Alabdullatef
- Rawan Alyahya
- Jun Chen
- Deyao Zhu
- Ahmed Alsinan
- Mohamed Elhoseiny