
GatorTron: A Large Clinical Language Model to Unlock Patient Information from Unstructured Electronic Health Records (2203.03540v3)

Published 2 Feb 2022 in cs.CL, cs.AI, and cs.LG

Abstract: There is an increasing interest in developing AI systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, the largest of which trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks including clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.

An LLM for Electronic Health Records: GatorTron

The paper presents GatorTron, an LLM developed specifically for clinical text in electronic health records (EHRs). The model is a significant advance in clinical NLP, aiming to leverage the vast amounts of unstructured narrative data in EHRs to improve healthcare delivery and outcomes.

Model Development and Architecture

GatorTron is built on the Transformer architecture, whose self-attention mechanism makes it effective at modeling long-range dependencies in text. The study introduces multiple GatorTron configurations to evaluate the effects of scale: a base model with 345 million parameters, a medium model with 3.9 billion parameters, and a large model with 8.9 billion parameters. All are trained on a corpus of more than 90 billion words, including over 82 billion words of de-identified clinical notes from UF Health, supplemented with text from PubMed and Wikipedia.
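
As a concrete illustration, the sketch below encodes a clinical sentence with a GatorTron-style BERT encoder through the Hugging Face transformers API. The checkpoint identifier is an assumption (the paper itself distributes weights via NVIDIA NGC); substitute whichever checkpoint you have access to.

```python
# Minimal sketch: embedding a clinical sentence with a GatorTron-style
# encoder. The model name below is an assumed HF-format mirror of the
# 345M base model, not the paper's official distribution channel.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "UFNLP/gatortron-base"  # assumption; swap in your checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

note = "Patient denies chest pain but reports shortness of breath on exertion."
inputs = tokenizer(note, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Mask-aware mean pooling of token embeddings into one sentence vector.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_vec = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vec.shape)  # torch.Size([1, hidden_size])
```

Mean pooling is used here only as a simple, model-agnostic way to get a fixed-size representation; task-specific heads (see below) are how the paper's evaluations actually consume the encoder.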

Evaluation Across Clinical NLP Tasks

GatorTron is evaluated on five core clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). These tasks are crucial for interpreting EHRs, which are predominantly composed of unstructured narrative data. The empirical results indicate that GatorTron outperforms existing biomedical and clinical transformers like BioBERT, ClinicalBERT, and BioMegatron across all evaluated tasks. Notably, GatorTron's performance in NLI and MQA tasks, which are inherently complex, shows remarkable accuracy improvements of 9.6% and 9.5%, respectively.
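
For instance, clinical concept extraction is typically cast as BIO token classification over problems, treatments, and tests, in the style of the i2b2 benchmarks the paper evaluates on. A hedged sketch of wiring a GatorTron-style encoder into that setup, again assuming an HF-compatible checkpoint and an illustrative label set:

```python
# Sketch: GatorTron-style encoder + token-classification head for clinical
# concept extraction. The label scheme and checkpoint name are assumptions
# chosen to mirror an i2b2-style setup, not the paper's exact code.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-problem", "I-problem", "B-treatment", "I-treatment",
          "B-test", "I-test"]
model_name = "UFNLP/gatortron-base"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

text = "Started metformin for type 2 diabetes; HbA1c pending."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, num_labels)

# The classification head is randomly initialized before fine-tuning,
# so these predictions are placeholders until the model is trained.
pred = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), pred)))
```

Fine-tuning then proceeds with standard cross-entropy over gold BIO tags, e.g. via the transformers Trainer; the other four tasks swap in sequence-pair or span-prediction heads in the same way.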

Implications and Future Directions

The implications of this paper are far-reaching for medical AI systems. By significantly improving the extraction and interpretation of clinical narrative data, GatorTron can enhance clinical decision support systems, improve patient cohort identification, and support pharmacovigilance efforts. The robustness of large transformer models like GatorTron in complex NLP tasks suggests potential for ongoing advancements in medical AI applications.

Future work will likely focus on optimizing GatorTron to handle longer input sequences, a crucial factor for improving outcomes in NLI and MQA scenarios; a standard chunking workaround is sketched below. Furthermore, given that larger models tend to converge faster and perform better, researchers might explore even larger configurations or hybrid models that integrate additional domain-specific data.
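
One common workaround for the 512-token context limit of BERT-style encoders is sliding-window chunking with overlap, so evidence near a chunk boundary is seen whole by at least one window. This is a standard technique, not a method from the paper, and the window/stride values are illustrative assumptions:

```python
# Sketch: overlapping sliding-window chunking so a fixed-context encoder
# can cover an arbitrarily long clinical note. Window and stride are
# illustrative defaults, not values from the paper.
def chunk_ids(token_ids, window=512, stride=384):
    """Split token_ids into windows of length <= `window`, each starting
    `stride` tokens after the previous one, covering the full sequence."""
    if len(token_ids) <= window:
        return [token_ids]
    chunks, start = [], 0
    while True:
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            return chunks
        start += stride

# Example: a 1000-token note becomes three overlapping windows.
print([len(c) for c in chunk_ids(list(range(1000)))])  # [512, 512, 232]
```

Per-chunk predictions are then merged (e.g., by averaging overlapping logits), trading some global context for full coverage of the note.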

Conclusion

The development of GatorTron marks an important step in clinical NLP, emphasizing the benefits of scaling both parameter size and data volume for transformer models. By addressing the unique challenges posed by clinical narrative data, GatorTron enhances the ability of AI systems to make meaningful contributions to healthcare delivery and patient outcomes. This research underscores the potential of LLMs in transforming EHR data into actionable clinical insights. As this field continues to evolve, GatorTron provides a foundation for future innovations in medical AI.

Authors (18)
  1. Xi Yang
  2. Aokun Chen
  3. Nima PourNejatian
  4. Hoo Chang Shin
  5. Kaleb E Smith
  6. Christopher Parisien
  7. Colin Compas
  8. Cheryl Martin
  9. Mona G Flores
  10. Ying Zhang
  11. Tanja Magoc
  12. Christopher A Harle
  13. Gloria Lipori
  14. Duane A Mitchell
  15. William R Hogan
  16. Elizabeth A Shenkman
  17. Jiang Bian
  18. Yonghui Wu
Citations (410)