Exploring the Boundaries of GPT-4 in Radiology (2310.14573v1)
Abstract: The recent success of general-domain LLMs has shifted the natural language processing paradigm towards a unified foundation model across domains and applications. In this paper, we assess the performance of GPT-4, the most capable LLM to date, on text-based applications for radiology reports, comparing it against state-of-the-art (SOTA) radiology-specific models. Exploring various prompting strategies, we evaluate GPT-4 on a diverse range of common radiology tasks and find that it either outperforms or is on par with current SOTA radiology models. With zero-shot prompting, GPT-4 already obtains substantial gains ($\approx$ 10% absolute improvement) over radiology models in temporal sentence similarity classification (accuracy) and natural language inference ($F_1$). For tasks that require learning dataset-specific style or schema (e.g. findings summarisation), GPT-4 improves with example-based prompting and matches supervised SOTA. Our extensive error analysis with a board-certified radiologist shows that GPT-4 has a sufficient level of radiology knowledge, with only occasional errors in complex contexts that require nuanced domain knowledge. For findings summarisation, GPT-4 outputs are overall comparable with existing manually written impressions.
- Qianchu Liu
- Stephanie Hyland
- Shruthi Bannur
- Kenza Bouzid
- Daniel C. Castro
- Maria Teodora Wetscherek
- Robert Tinn
- Harshita Sharma
- Fernando Pérez-García
- Anton Schwaighofer
- Pranav Rajpurkar
- Sameer Tajdin Khanna
- Hoifung Poon
- Naoto Usuyama
- Anja Thieme
- Aditya V. Nori
- Matthew P. Lungren
- Ozan Oktay
- Javier Alvarez-Valle
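
To make the two prompting setups in the abstract concrete, below is a minimal sketch contrasting zero-shot with example-based (few-shot) prompting for findings summarisation, using the OpenAI Python SDK (v1 style). The system prompt, model identifier, and example findings/impression pairs are illustrative assumptions, not the paper's actual prompts or data.

```python
# Sketch: zero-shot vs. example-based (few-shot) prompting for radiology
# findings summarisation. All prompts and examples here are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FINDINGS = (
    "Heart size is normal. Lungs are clear without focal consolidation, "
    "effusion, or pneumothorax."
)

def summarise(findings: str, examples: list[tuple[str, str]] | None = None) -> str:
    """Ask GPT-4 to write an impression for a radiology findings section.

    With examples=None this is zero-shot prompting; passing (findings,
    impression) pairs makes it example-based prompting, which can help the
    model pick up dataset-specific style and schema.
    """
    messages = [{
        "role": "system",
        "content": ("You are a radiologist. Summarise the findings of a "
                    "chest X-ray report into a concise impression."),
    }]
    # In-context examples are supplied as prior user/assistant turns.
    for ex_findings, ex_impression in examples or []:
        messages.append({"role": "user", "content": ex_findings})
        messages.append({"role": "assistant", "content": ex_impression})
    messages.append({"role": "user", "content": findings})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Zero-shot prompting:
print(summarise(FINDINGS))

# Example-based prompting, with one hypothetical in-domain example:
print(summarise(FINDINGS, examples=[
    ("Stable cardiomegaly. No acute infiltrate.",
     "1. Stable cardiomegaly. 2. No acute cardiopulmonary process."),
]))
```

The few-shot variant differs only in the prior conversation turns it prepends, which is one simple way to expose the dataset-specific impression style that, per the abstract, lets GPT-4 match supervised SOTA on this task.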