3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models (2409.19330v1)
Abstract: Medical image analysis is crucial in modern radiological diagnostics, especially given the exponential growth in medical imaging data, and the demand for automated report generation systems has become increasingly urgent. While prior research has mainly focused on applying machine learning and multimodal LLMs to 2D medical images, report generation for 3D medical images has been less explored due to data scarcity and computational complexity. This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model specifically designed for generating radiology reports from 3D CT scans, particularly chest CTs. Extensive experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality. Although existing methods for this task are few, notably the partially open-source CT2Rep and the open-source M3D, we ensured a fair comparison through appropriate data conversion and evaluation methodologies. Experimental results indicate that 3D-CT-GPT enhances diagnostic accuracy and report coherence, establishing itself as a robust solution for clinical radiology report generation. Future work will focus on expanding the dataset and further optimizing the model to enhance its performance and applicability.
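The abstract compares generated reports against existing methods on report accuracy and quality but does not spell out the metrics. As a minimal sketch, assuming a standard n-gram/overlap metric such as ROUGE-L is part of the evaluation (a common choice for report generation, not confirmed by the abstract), the score can be computed from the longest common subsequence between a reference report and a generated one; the function names below are illustrative only.

```python
# Hedged sketch: ROUGE-L F1 between a reference radiology report and a
# generated report. This assumes ROUGE-L is among the evaluation metrics;
# the paper's actual evaluation pipeline is not described in the abstract.

def lcs_length(a, b):
    """Length of the longest common subsequence between two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            if tok_a == tok_b:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference: str, candidate: str) -> float:
    """ROUGE-L F1 computed on whitespace-tokenized, lowercased text."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    if not ref or not cand:
        return 0.0
    lcs = lcs_length(ref, cand)
    precision, recall = lcs / len(cand), lcs / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    reference = "No focal consolidation or pleural effusion is seen in either lung."
    generated = "No consolidation or pleural effusion is identified."
    print(f"ROUGE-L F1: {rouge_l_f1(reference, generated):.3f}")
```

In practice, evaluations of this kind typically report several such metrics (e.g., BLEU, METEOR, ROUGE-L) side by side; the single-metric sketch above only illustrates the mechanics of one of them.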
- Hao Chen
- Wei Zhao
- Yingli Li
- Tianyang Zhong
- Yisong Wang
- Youlan Shang
- Lei Guo
- Junwei Han
- Tianming Liu
- Jun Liu
- Tuo Zhang