Towards a Holistic Framework for Multimodal Large Language Models in Three-dimensional Brain CT Report Generation (2407.02235v1)

Published 2 Jul 2024 in cs.CL

Abstract: Multi-modal LLMs (MLLMs) have been given free rein to explore exciting medical applications, with a primary focus on radiology report generation. Nevertheless, preliminary success in 2D radiology captioning does not adequately reflect the real-world diagnostic challenge of volumetric 3D anatomy. To mitigate three crucial limitations in the existing literature, namely (1) data complexity, (2) model capacity, and (3) evaluation metric fidelity, we collected a 3D-BrainCT dataset of 18,885 text-scan pairs and applied clinical visual instruction tuning (CVIT) to train BrainGPT models that generate radiology-adherent 3D brain CT reports. Statistically, our BrainGPT scored BLEU-1 = 44.35, BLEU-4 = 20.38, METEOR = 30.13, ROUGE-L = 47.6, and CIDEr-R = 211.77 during internal testing, and achieved an accuracy of 0.91 in captioning midline shifts on the external CQ500 validation dataset. By further inspecting the captioned reports, we found that the traditional metrics measure only surface text similarity and fail to gauge the diagnostic information density of a report. To close this gap, we proposed a novel Feature-Oriented Radiology Task Evaluation (FORTE) to estimate a report's clinical relevance (lesion features and landmarks). Notably, the BrainGPT model scored an average FORTE F1-score of 0.71 (degree = 0.661; landmark = 0.706; feature = 0.693; impression = 0.779). To demonstrate that BrainGPT models are objectively ready to generate human-like radiology reports, we conducted a Turing test enrolling 11 physician evaluators: around 74% of the BrainGPT-generated captions were indistinguishable from those written by humans. Our work embodies a holistic framework, sharing first-hand experience in curating a 3D brain CT dataset, fine-tuning anatomy-sensible LLMs, and proposing robust radiology evaluation metrics.
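To make the surface-similarity limitation concrete, here is a minimal sketch using NLTK's `sentence_bleu` (not the authors' evaluation code); the two example report sentences are hypothetical and chosen so that a clinically critical detail differs while most n-grams overlap.

```python
# Minimal sketch (not the authors' pipeline): n-gram metrics such as BLEU
# reward surface overlap, so reports with opposite clinical meaning can
# still score well. The example sentences below are hypothetical.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "mild rightward midline shift with left subdural hematoma".split()
candidate = "mild leftward midline shift with right subdural hematoma".split()

smooth = SmoothingFunction().method1
bleu1 = sentence_bleu([reference], candidate, weights=(1, 0, 0, 0),
                      smoothing_function=smooth)
bleu4 = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-1 = {bleu1:.2f}, BLEU-4 = {bleu4:.2f}")
# BLEU-1 stays high (6 of 8 unigrams match) even though the laterality of
# both findings is reversed, which is exactly the diagnostic content a
# clinician cares about.
```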

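The reported overall FORTE score is consistent with an unweighted macro-average of the four per-category F1 scores quoted in the abstract; the quick check below assumes that averaging scheme, which the numbers match.

```python
# Quick check (assumption: the overall FORTE score is an unweighted
# macro-average of the four per-category F1 scores from the abstract).
forte_f1 = {
    "degree": 0.661,
    "landmark": 0.706,
    "feature": 0.693,
    "impression": 0.779,
}

macro_f1 = sum(forte_f1.values()) / len(forte_f1)
print(f"Macro-averaged FORTE F1 = {macro_f1:.2f}")  # 0.71, matching the reported average
```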
Authors (13)
  1. Cheng-Yi Li (3 papers)
  2. Kao-Jung Chang (3 papers)
  3. Cheng-Fu Yang (10 papers)
  4. Hsin-Yu Wu (8 papers)
  5. Wenting Chen (26 papers)
  6. Hritik Bansal (38 papers)
  7. Ling Chen (144 papers)
  8. Yi-Ping Yang (1 paper)
  9. Yu-Chun Chen (5 papers)
  10. Shih-Pin Chen (1 paper)
  11. Jiing-Feng Lirng (1 paper)
  12. Kai-Wei Chang (292 papers)
  13. Shih-Hwa Chiou (1 paper)
Citations (2)