Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation (2403.08002v5)
Abstract: The scaling laws and extraordinary performance of large foundation models motivate the development and utilization of such models in biomedicine. However, despite early promising results on some biomedical benchmarks, major challenges remain before these models can be used in real-world clinics. Frontier general-domain models such as GPT-4V still have significant performance gaps in multimodal biomedical applications. More importantly, less-acknowledged pragmatic issues, including accessibility, model cost, and tedious manual evaluation, make it hard for clinicians to use state-of-the-art large models directly on private patient data. Here, we explore training open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology. To maximize data efficiency, we adopt a modular approach, incorporating state-of-the-art pre-trained models for the image and text modalities and focusing on training a lightweight adapter that grounds each modality in the text embedding space, as exemplified by LLaVA-Med. For training, we assemble a large dataset of over 697,000 radiology image-text pairs. For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation. For best practice, we conduct a systematic ablation study on various choices in data engineering and multimodal training. The resulting LLaVA-Rad (7B) model attains state-of-the-art results on standard radiology tasks such as report generation and cross-modal retrieval, outperforming even much larger models such as GPT-4V and Med-PaLM M (84B). LLaVA-Rad inference is fast and can be run on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
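To make the modular design in the abstract concrete, below is a minimal PyTorch sketch of a LLaVA-style lightweight adapter: a small MLP that projects frozen image-encoder features into the language model's text embedding space, which is the only component trained during alignment. The class name, the two-layer MLP shape, the 768/4096 dimensions, and the BiomedCLIP mention are illustrative assumptions for exposition, not the paper's released code.

```python
import torch
import torch.nn as nn


class MultimodalProjector(nn.Module):
    """Illustrative LLaVA-style adapter (not the paper's implementation).

    Maps patch features from a frozen image encoder (e.g., a BiomedCLIP ViT)
    into the language model's embedding space so image tokens can be
    concatenated with text token embeddings.
    """

    def __init__(self, vision_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        # Two-layer MLP projector; in the modular setup described above,
        # only this adapter is trained, keeping the approach data-efficient.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from the frozen encoder
        return self.proj(patch_features)


if __name__ == "__main__":
    # Toy forward pass with hypothetical shapes: project 196 ViT patch
    # embeddings into a 4096-dim LLM space, then prepend them to text tokens.
    projector = MultimodalProjector()
    image_tokens = projector(torch.randn(1, 196, 768))        # (1, 196, 4096)
    text_tokens = torch.randn(1, 32, 4096)                     # placeholder token embeddings
    llm_input = torch.cat([image_tokens, text_tokens], dim=1)  # (1, 228, 4096)
    print(llm_input.shape)
```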
- Radiology report generation using transformers conditioned with non-imaging data. In Medical Imaging 2023: Imaging Informatics for Healthcare, Research, and Applications, volume 12469, pages 146–154. SPIE, 2023.
- Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022.
- Robust and efficient medical imaging with self-supervision. arXiv preprint arXiv:2205.09723, 2022.
- Big self-supervised models advance medical image classification. arXiv preprint arXiv:2101.05224, 2021.
- On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
- PadChest: A large chest x-ray image dataset with multi-label annotated reports. Medical Image Analysis, 66:101797, 2020.
- Generating radiology reports via memory-driven transformer. arXiv preprint arXiv:2010.16056, 2020.
- CheXagent: Towards a foundation model for chest x-ray interpretation. arXiv preprint arXiv:2401.12208, 2024.
- Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023.
- The future landscape of large language models in medicine. Communications Medicine, 3(1):141, 2023.
- Improving the factual correctness of radiology report generation with semantic rewards. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4348–4360, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
- QLoRA: Efficient finetuning of quantized LLMs, 2023.
- CANDID-PTX. Radiology: Artificial Intelligence, 2021.
- ChatGPT outperforms crowd workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
- Textbooks are all you need, 2023.
- JF Healthcare. Object-CXR: Automatic detection of foreign objects on chest X-rays. https://web.archive.org/web/20201127235812/https://jfhealthcare.github.io/object-CXR/.
- MKCL: Medical knowledge with contrastive learning model for radiology report generation. Journal of Biomedical Informatics, 146:104496, 2023.
- LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
- Generative artificial intelligence for chest radiograph interpretation in the emergency department. JAMA Network Open, 6(10):e2336100, 2023.
- Developing medical imaging AI for emerging infectious diseases. Nature Communications, 13(1):7060, 2022.
- Self-supervised learning for medical image classification: a systematic review and implementation guidelines. NPJ Digital Medicine, 6(1):74, 2023.
- MAIRA-1: A specialised large multimodal model for radiology report generation. arXiv preprint arXiv:2311.13668, 2023.
- CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590–597, 2019.
- RadGraph: Extracting clinical entities and relations from radiology reports, 2021.
- MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific Data, 6(1):317, 2019.
- Curtis P Langlotz. The future of AI and informatics in radiology: 10 predictions. Radiology, 309(1):e231114, 2023.
- Do we still need clinical language models? In Bobak J. Mortazavi, Tasmie Sarker, Andrew Beam, and Joyce C. Ho, editors, Proceedings of the Conference on Health, Inference, and Learning, volume 209 of Proceedings of Machine Learning Research, pages 578–597. PMLR, 22 Jun–24 Jun 2023.
- LLaVA-Med: Training a large language-and-vision assistant for biomedicine in one day. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.
- Improved baselines with visual instruction tuning, 2023.
- Visual instruction tuning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
- GPTEval: NLG evaluation using GPT-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023.
- An empirical study of scaling instruct-tuned large multimodal models, 2023.
- Orca 2: Teaching small language models how to reason, 2023.
- Vision transformer and language model based radiology report generation. IEEE Access, 11:1814–1824, 2022.
- Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259–265, 2023.
- VinBigData chest X-ray abnormalities detection. Kaggle competition, https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection, 2020.
- DINOv2: Learning robust visual features without supervision, 2024.
- S3-net: A self-supervised dual-stream network for radiology report generation. IEEE Journal of Biomedical and Health Informatics, 2023.
- Improving radiology report generation systems by removing hallucinated references to non-existent priors. In Machine Learning for Health, pages 456–473. PMLR, 2022.
- BRAX, Brazilian labeled chest X-ray dataset. Scientific Data, 9(1):487, 2022.
- Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence, 4(10):867–878, 2022.
- CheXbert: Combining automatic labelers and expert annotations for accurate radiology report labeling using BERT, 2020.
- Consensus, dissensus and synergy between clinicians and specialist foundation models in radiology report generation. arXiv preprint arXiv:2311.18260, 2023.
- Large language models in medicine. Nature Medicine, 29(8):1930–1940, 2023.
- The ecological footprint of medical AI. European Radiology, pages 1–3, 2023.
- Towards generalist biomedical ai. NEJM AI, 1(3):AIoa2300138, 2024.
- Is ChatGPT a good NLG evaluator? A preliminary study. arXiv preprint arXiv:2303.04048, 2023.
- Cross-modal prototype driven network for radiology report generation. In European Conference on Computer Vision, pages 563–579. Springer, 2022.
- Attention-based saliency maps improve interpretability of pneumothorax classification. Radiology: Artificial Intelligence, 5(2):e220187, 2023.
- The shaky foundations of large language models and foundation models for electronic health records. npj Digital Medicine, 6(1):135, 2023.
- Evaluating progress in automatic chest x-ray radiology report generation. Patterns, 4(9), 2023.
- Radiology report expert evaluation (ReXVal) dataset, 2023.
- RaLEs: a benchmark for radiology language evaluations. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 74429–74454. Curran Associates, Inc., 2023.
- Revisiting the trustworthiness of saliency methods in radiology ai. Radiology: Artificial Intelligence, 6(1):e220221, 2024. PMID: 38166328.
- BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs, 2024.
- Optimizing the factual correctness of a summary: A study of summarizing radiology reports. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5108–5120, 2020.