A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis (2310.20381v5)
Abstract: This work evaluates GPT-4V's multimodal capability for medical image analysis, focusing on three representative tasks: radiology report generation, medical visual question answering, and medical visual grounding. For each task, a set of prompts is designed to elicit the corresponding capability of GPT-4V and produce sufficiently good outputs. Three evaluation approaches, namely quantitative analysis, human evaluation, and case study, are employed to achieve an in-depth and extensive assessment. Our evaluation shows that GPT-4V excels in understanding medical images: it can generate high-quality radiology reports and effectively answer questions about medical images. Meanwhile, its performance on medical visual grounding needs substantial improvement. In addition, we observe a discrepancy between the outcomes of quantitative analysis and human evaluation. This discrepancy suggests the limitations of conventional metrics in assessing the performance of large models like GPT-4V and the necessity of developing new metrics for automatic quantitative analysis.
- Yingshu Li
- Yunyi Liu
- Zhanyu Wang
- Xinyu Liang
- Lingqiao Liu
- Lei Wang
- Leyang Cui
- Zhaopeng Tu
- Longyue Wang
- Luping Zhou