Multimodal Large Language Models for Medical Report Generation via Customized Prompt Tuning (2506.15477v1)
Abstract: Medical report generation from imaging data remains a challenging task in clinical practice. While large language models (LLMs) show great promise in addressing this challenge, their effective integration with medical imaging data remains underexplored. In this paper, we present MRG-LLM, a novel multimodal LLM (MLLM) that combines a frozen LLM with a learnable visual encoder and introduces a dynamic prompt customization mechanism. Our key innovation lies in generating instance-specific prompts tailored to individual medical images through conditional affine transformations derived from visual features. We propose two implementations, prompt-wise and promptbook-wise customization, enabling precise and targeted report generation. Extensive experiments on the IU X-ray and MIMIC-CXR datasets demonstrate that MRG-LLM achieves state-of-the-art performance in medical report generation. Our code will be made publicly available.
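The core mechanism described in the abstract, conditioning a set of learnable prompts on visual features via an affine transform, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the module name `PromptCustomizer`, the pooled-feature input, the initialization, and the exact form of the prompt-wise vs. promptbook-wise split are all hypothetical, since the paper's architectural details are not given in this entry.

```python
# Illustrative sketch of conditional affine prompt customization.
# All names, shapes, and initialization choices are assumptions;
# the paper's exact design is not specified in this abstract.
import torch
import torch.nn as nn

class PromptCustomizer(nn.Module):
    """Customize a learnable promptbook per image via a conditional
    affine transform (scale and shift) predicted from visual features."""

    def __init__(self, num_prompts: int, dim: int, promptwise: bool = True):
        super().__init__()
        # Learnable base prompts shared across all images (the "promptbook").
        self.promptbook = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.promptwise = promptwise
        # Prompt-wise: a separate (scale, shift) pair for each prompt;
        # promptbook-wise: one (scale, shift) pair shared by all prompts.
        out_dim = num_prompts * dim if promptwise else dim
        self.to_scale = nn.Linear(dim, out_dim)
        self.to_shift = nn.Linear(dim, out_dim)

    def forward(self, visual_feat: torch.Tensor) -> torch.Tensor:
        # visual_feat: (batch, dim) pooled features from the visual encoder.
        b, d = visual_feat.shape
        n = self.promptbook.shape[0]
        scale = self.to_scale(visual_feat)
        shift = self.to_shift(visual_feat)
        if self.promptwise:
            scale = scale.view(b, n, d)
            shift = shift.view(b, n, d)
        else:
            scale = scale.view(b, 1, d)  # broadcast over all prompts
            shift = shift.view(b, 1, d)
        # Conditional affine transform: adapt the shared promptbook
        # into instance-specific prompts for this particular image.
        prompts = (1 + scale) * self.promptbook.unsqueeze(0) + shift
        return prompts  # (batch, num_prompts, dim)

# Usage: customized prompts for a batch of 4 images with 768-dim features.
customizer = PromptCustomizer(num_prompts=16, dim=768, promptwise=True)
prompts = customizer(torch.randn(4, 768))  # -> shape (4, 16, 768)
```

In a setup like the one the abstract describes, these customized prompt embeddings would presumably be fed to the frozen LLM alongside the encoded visual tokens, so that only the visual encoder and the customization parameters are trained.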
- Chunlei Li
- Jingyang Hou
- Yilei Shi
- Jingliang Hu
- Xiao Xiang Zhu
- Lichao Mou