Consensus, dissensus and synergy between clinicians and specialist foundation models in radiology report generation (2311.18260v3)

Published 30 Nov 2023 in eess.IV, cs.CL, cs.CV, and cs.LG

Abstract: Radiology reports are an instrumental part of modern medicine, informing key clinical decisions such as diagnosis and treatment. The worldwide shortage of radiologists, however, restricts access to expert care and imposes heavy workloads, contributing to avoidable errors and delays in report delivery. While recent progress in automated report generation with vision-LLMs offers clear potential for ameliorating the situation, the path to real-world adoption has been stymied by the challenge of evaluating the clinical quality of AI-generated reports. In this study, we build a state-of-the-art report generation system for chest radiographs, $\textit{Flamingo-CXR}$, by fine-tuning a well-known vision-language foundation model on radiology data. To evaluate the quality of the AI-generated reports, a group of 16 certified radiologists provide detailed evaluations of AI-generated and human-written reports for chest X-rays from an intensive care setting in the United States and an inpatient setting in India. At least one radiologist (out of two per case) preferred the AI report to the ground-truth report in over 60$\%$ of cases for both datasets. Amongst the subset of AI-generated reports that contain errors, the most frequently cited reasons were related to location and finding, whereas for human-written reports, most mistakes were related to severity and finding. This disparity suggested potential complementarity between our AI system and human experts, prompting us to develop an assistive scenario in which Flamingo-CXR generates a first-draft report, which is subsequently revised by a clinician. This is the first demonstration of clinician-AI collaboration for report writing, and the resultant reports are assessed to be equivalent or preferred by at least one radiologist to reports written by experts alone in 80$\%$ of inpatient cases and 60$\%$ of intensive care cases.
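
The headline evaluation figure in the abstract is a per-case preference aggregation: a case counts toward the "over 60%" result if at least one of the two radiologists assigned to it preferred the AI-generated report to the ground-truth report. Below is a minimal sketch of that aggregation; the data layout, labels, and function name are illustrative assumptions, not taken from the paper's code or evaluation protocol.

```python
# Sketch of the "at least one of two raters preferred the AI report" metric
# described in the abstract. The per-case label values ("ai", "human",
# "equivalent") and the list-of-tuples layout are assumptions for illustration.

from typing import List, Tuple

# Each case carries the judgements of its two radiologists.
Case = Tuple[str, str]


def at_least_one_prefers_ai(cases: List[Case]) -> float:
    """Fraction of cases where at least one of the two raters preferred the AI report."""
    hits = sum(1 for r1, r2 in cases if "ai" in (r1, r2))
    return hits / len(cases) if cases else 0.0


if __name__ == "__main__":
    # Toy example with four cases (not real study data).
    example = [("ai", "human"), ("human", "human"), ("equivalent", "ai"), ("ai", "ai")]
    print(f"AI preferred by at least one rater: {at_least_one_prefers_ai(example):.0%}")
```

The same aggregation would apply to the assistive-scenario comparison (clinician-revised AI draft versus report written by an expert alone), with the preference labels swapped accordingly.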

Authors (26)
  1. Ryutaro Tanno (36 papers)
  2. David G. T. Barrett (16 papers)
  3. Andrew Sellergren (8 papers)
  4. Sumedh Ghaisas (2 papers)
  5. Sumanth Dathathri (14 papers)
  6. Abigail See (9 papers)
  7. Johannes Welbl (20 papers)
  8. Karan Singhal (26 papers)
  9. Shekoofeh Azizi (23 papers)
  10. Tao Tu (45 papers)
  11. Mike Schaekermann (20 papers)
  12. Rhys May (3 papers)
  13. Roy Lee (2 papers)
  14. SiWai Man (2 papers)
  15. Zahra Ahmed (2 papers)
  16. Sara Mahdavi (2 papers)
  17. Danielle Belgrave (6 papers)
  18. Vivek Natarajan (40 papers)
  19. Shravya Shetty (21 papers)
  20. Pushmeet Kohli (116 papers)
Citations (9)
