
A Study of Generative Large Language Model for Medical Research and Healthcare (2305.13523v1)

Published 22 May 2023 in cs.CL

Abstract: There is enormous enthusiasm and concern about using LLMs in healthcare, yet current assumptions are all based on general-purpose LLMs such as ChatGPT. This study develops a clinical generative LLM, GatorTronGPT, using 277 billion words of mixed clinical and English text with a GPT-3 architecture of 20 billion parameters. GatorTronGPT improves biomedical natural language processing for medical research. Synthetic NLP models trained using GatorTronGPT-generated text outperform NLP models trained using real-world clinical text. A physician Turing test using a 1 (worst) to 9 (best) scale shows that there is no significant difference in linguistic readability (p = 0.22; 6.57 for GatorTronGPT compared with 6.93 for human) or clinical relevance (p = 0.91; 7.0 for GatorTronGPT compared with 6.97 for human), and that physicians cannot differentiate them (p < 0.001). This study provides insights into the opportunities and challenges of LLMs for medical research and healthcare.

Generative LLM in Healthcare: Evaluations and Implications

The paper entitled "A Study of Generative LLM for Medical Research and Healthcare" presents the development and evaluation of a domain-specific generative LLM named GatorTronGPT, designed to enhance biomedical NLP tasks within the context of medical research and healthcare applications. Unlike widely used general-purpose models like ChatGPT, GatorTronGPT is specifically tailored for the clinical domain, leveraging a substantial dataset comprising 82 billion words of de-identified clinical text from University of Florida Health and 195 billion words of diverse English text from the Pile dataset.

Development and Training Methodology

This research trained GatorTronGPT from scratch using the GPT-3 architecture in configurations of 5 billion and 20 billion parameters, focusing on transfer, few-shot, and zero-shot learning capabilities. Training used 560 NVIDIA A100 GPUs in a supercomputing cluster, underscoring the computational demands of building models at this scale.
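As a rough illustration of the few-shot setting evaluated here, the sketch below prompts a causal language model with two labeled relation examples and asks it to complete a third. The model name and prompt format are placeholders, not the released GatorTronGPT checkpoint or the paper's actual prompt design.

```python
# Minimal few-shot prompting sketch with a causal LM, assuming a locally
# available checkpoint. "gpt2" is a placeholder, not GatorTronGPT.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Few-shot prompt: two labeled examples followed by the query instance.
prompt = (
    "Sentence: Warfarin interacts with aspirin, increasing bleeding risk.\n"
    "Relation: drug-drug interaction\n\n"
    "Sentence: Cisplatin exposure is associated with nephrotoxicity.\n"
    "Relation: chemical-disease relation\n\n"
    "Sentence: Imatinib inhibits the BCR-ABL tyrosine kinase.\n"
    "Relation:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the model's predicted relation).
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.strip())
```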

Comparative Evaluation and Results

GatorTronGPT outperformed pre-existing transformer models across several NLP benchmarks. The model achieved improved F1-scores on biomedical relation extraction tasks—specifically drug-drug interactions, chemical-disease relations, and drug-target interactions. It also exhibited notable accuracy improvements in question-answering tasks, matching or surpassing other high-performing models such as BioLinkBERT on datasets including MedQA and PubMedQA. The paper reports consistent performance gains as GatorTronGPT's parameter count scales up, corroborating the benefits of larger LLMs in achieving state-of-the-art results.
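For reference, the F1-score used for the relation extraction benchmarks can be computed as in the brief sketch below; the labels and predictions shown are invented for illustration and do not come from the paper's evaluation data.

```python
# Illustrative micro-F1 computation for relation extraction, the metric
# reported for the drug-drug, chemical-disease, and drug-target tasks.
# Gold labels and predictions below are made-up examples.
from sklearn.metrics import f1_score

gold = ["ddi", "none", "cdr", "dti", "ddi", "none"]
pred = ["ddi", "none", "cdr", "none", "ddi", "cdr"]

# Micro-averaged F1 over the relation classes, excluding the "none" label,
# which is the usual convention in relation extraction evaluation.
relation_labels = ["ddi", "cdr", "dti"]
print(f1_score(gold, pred, labels=relation_labels, average="micro"))
```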

Synthetic Text Generation and Applications

A key implication of the paper is the utility of synthetic clinical text for training NLP models. GatorTronS models, trained on this generated text, consistently outperformed counterparts trained on real-world clinical text across multiple benchmark datasets. This finding underscores the potential of synthetic text generation to mitigate the privacy concerns associated with real clinical data while preserving model performance and reliability.
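A minimal sketch of this synthetic-data workflow, assuming a generative checkpoint is available locally, might look like the following; the model name, prompts, and output file are illustrative placeholders rather than the authors' actual pipeline.

```python
# Sketch of the synthetic-text workflow: sample notes from a generative LM,
# then use them as a training corpus for a downstream model (GatorTronS in
# the paper). "gpt2" and the prompts are placeholders, not the real setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder for GatorTronGPT

prompts = [
    "Chief complaint: chest pain. History of present illness:",
    "Discharge summary: The patient was admitted for",
]

synthetic_notes = []
for p in prompts:
    out = generator(
        p,
        max_new_tokens=120,
        do_sample=True,
        top_p=0.9,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    synthetic_notes.append(out[0]["generated_text"])

# The resulting corpus would then stand in for real clinical notes when
# training a synthetic-text encoder such as GatorTronS.
with open("synthetic_corpus.txt", "w") as f:
    f.write("\n\n".join(synthetic_notes))
```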

Turing Test and Human Evaluation

The paper reports findings from a Turing test evaluation in which synthetic clinical texts generated by GatorTronGPT were virtually indistinguishable from human-authored notes in readability and clinical relevance, as judged by physician evaluators. These observations imply that the model could augment clinical documentation tasks without compromising authenticity or quality. However, limitations remain—the generated text does not always follow clinical logic—and warrant further research.
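To make the kind of comparison reported in the abstract concrete, the sketch below runs a two-sided Mann-Whitney U test on placeholder physician ratings; both the choice of test and the ratings themselves are assumptions for illustration, not the paper's actual analysis.

```python
# Illustrative comparison of physician ratings on the 1-9 scale mentioned
# in the abstract. Ratings are randomly generated placeholders, and the
# Mann-Whitney U test is an assumed choice of significance test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
gatortron_readability = rng.integers(5, 9, size=30)  # placeholder ratings
human_readability = rng.integers(5, 9, size=30)      # placeholder ratings

stat, p_value = mannwhitneyu(
    gatortron_readability, human_readability, alternative="two-sided"
)
# A p-value above 0.05 would indicate no detectable difference in ratings.
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```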

Implications and Future Directions

The research elaborates on the prospects and challenges that generative LLMs present for the medical domain. While these models show promise for a range of NLP tasks and for generating clinically relevant content, the paper emphasizes ongoing challenges such as hallucinations and biases inherent to probabilistic text generation. Future studies are encouraged to focus on controlling these phenomena through reinforcement learning and feedback mechanisms to ensure safer and more practical applications within healthcare.

In conclusion, GatorTronGPT marks a significant step towards integrating generative LLMs into medical research, offering avenues for reducing documentation burdens and facilitating data-driven insights within healthcare systems. However, further advancements and extensive validation in clinical practice settings are essential to unlock its full potential. The paper lays a foundation for domain-specific AI applications, challenging researchers to keep innovating while upholding ethical standards in medical informatics development.

Authors (19)
  1. Cheng Peng (177 papers)
  2. Xi Yang (160 papers)
  3. Aokun Chen (12 papers)
  4. Kaleb E Smith (14 papers)
  5. Nima PourNejatian (2 papers)
  6. Anthony B Costa (4 papers)
  7. Cheryl Martin (2 papers)
  8. Mona G Flores (6 papers)
  9. Ying Zhang (388 papers)
  10. Tanja Magoc (4 papers)
  11. Gloria Lipori (4 papers)
  12. Duane A Mitchell (2 papers)
  13. Naykky S Ospina (1 paper)
  14. Mustafa M Ahmed (1 paper)
  15. William R Hogan (8 papers)
  16. Elizabeth A Shenkman (4 papers)
  17. Yi Guo (115 papers)
  18. Jiang Bian (229 papers)
  19. Yonghui Wu (115 papers)
Citations (170)