Radiology-Llama2: Best-in-Class Large Language Model for Radiology (2309.06419v1)
Abstract: This paper introduces Radiology-Llama2, an LLM specialized for radiology through instruction tuning. Radiology-Llama2 is based on the Llama2 architecture and further trained on a large dataset of radiology reports to generate coherent and clinically useful impressions from radiological findings. Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance compared to other generative LLMs, with a ROUGE-1 score of 0.4834 on MIMIC-CXR and 0.4185 on OpenI. Additional assessments by radiology experts highlight the model's strengths in understandability, coherence, relevance, conciseness, and clinical utility. The work illustrates the potential of localized LLMs designed and tuned for specialized domains like radiology. When properly evaluated and deployed, such models can transform fields like radiology by automating rote tasks and enhancing human expertise.
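As context for the ROUGE-1 scores reported above, here is a minimal from-scratch sketch of unigram-level ROUGE-1 F1 (this is an illustrative implementation, not the paper's evaluation code, and the example texts are invented, not drawn from MIMIC-CXR or OpenI):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # Counter intersection clips each word's count to its reference frequency.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Toy radiology-style impressions (hypothetical text for illustration only).
generated = "no acute cardiopulmonary abnormality"
reference = "no acute cardiopulmonary process"
print(rouge1_f1(generated, reference))  # 3 of 4 unigrams overlap -> 0.75
```

A ROUGE-1 score of 0.4834, as reported on MIMIC-CXR, means roughly that the harmonic mean of unigram precision and recall against the reference impression is about 0.48.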
Authors: Zhengliang Liu, Yiwei Li, Peng Shu, Aoxiao Zhong, Longtao Yang, Chao Ju, Zihao Wu, Chong Ma, Jie Luo, Cheng Chen, Sekeun Kim, Jiang Hu, Haixing Dai, Lin Zhao, Dajiang Zhu, Jun Liu, Wei Liu, Dinggang Shen, Tianming Liu, Quanzheng Li