Publicly Available Clinical BERT Embeddings (1904.03323v3)

Published 6 Apr 2019 in cs.CL

Abstract: Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many NLP tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly-available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on three common clinical NLP tasks as compared to nonspecific embeddings. We also find that these domain-specific models are not as performant on two clinical de-identification tasks, and argue that this is a natural consequence of the differences between de-identified source text and synthetically non de-identified task text.


The paper "Publicly Available Clinical BERT Embeddings" by Alsentzer et al. presents an exploration and release of BERT models fine-tuned on clinical text, addressing the lack of publicly accessible pre-trained BERT models for clinical narratives. The authors aim to demonstrate improvements in performance on domain-specific tasks using these specialized models.

Introduction

Recent advances in NLP have been largely driven by the development of contextual word embedding models like ELMo, ULMFiT, and BERT. These models have predominantly been applied to general domain text, with some recent efforts extending to biomedical corpora. However, clinical narratives possess unique linguistic characteristics, necessitating the development of dedicated clinical BERT models. This paper responds to this need by training and releasing BERT models tailored to clinical texts and discharge summaries. The authors evaluate the performance of these domain-specific models on various clinical NLP tasks, comparing them to general BERT and BioBERT models.

Methods

The authors utilized the MIMIC-III v1.4 database, containing approximately 2 million clinical notes, to continue pre-training two models initialized from BERT-Base: Clinical BERT (trained on all note types) and Discharge Summary BERT (trained only on discharge summaries). They also continued pre-training the existing BioBERT checkpoint on the same text, producing Bio+Clinical BERT and Bio+Discharge Summary BERT. The training procedure was standard, adopting the BERT-Base architecture and masked-language-model pre-training without novel technical additions; each model was subsequently fine-tuned on the downstream tasks.
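This continued pre-training step can be approximated with standard masked-language-model tooling. The following is a minimal sketch using the HuggingFace transformers Trainer rather than the original TensorFlow BERT code the authors used; the notes file path, base checkpoint, and hyperparameters are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: continue masked-LM pre-training of BERT on clinical notes.
# Assumes notes have been exported (after MIMIC-III access approval) to a
# plain-text file with one note segment per line; "mimic_notes.txt" is a
# hypothetical path and the hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "bert-base-cased"  # or a BioBERT checkpoint to mirror Bio+Clinical BERT
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

raw = load_dataset("text", data_files={"train": "mimic_notes.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="clinical-bert",
                         per_device_train_batch_size=32,
                         num_train_epochs=1,
                         save_steps=10_000,
                         logging_steps=500)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```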

The paper evaluates these models across multiple clinical NLP tasks, listed below; a fine-tuning sketch follows the list.

  • MedNLI: A medical natural language inference task.
  • i2b2 2006: De-identification task (exact F1 score).
  • i2b2 2010: Concept extraction (exact F1 score).
  • i2b2 2012: Temporal entity extraction challenge (exact F1 score).
  • i2b2 2014: A second de-identification challenge (exact F1 score).
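As a concrete illustration of the downstream evaluation, the sketch below fine-tunes a clinical checkpoint for three-way natural language inference in the style of MedNLI. MedNLI itself requires credentialed access, so the premise/hypothesis pairs are placeholders; the model identifier points at the commonly used HuggingFace Hub mirror of the released weights, and the hyperparameters are assumptions rather than the paper's exact setup.

```python
# Sketch: three-way NLI fine-tuning (entailment / neutral / contradiction),
# in the style of the MedNLI evaluation. The example pairs are placeholders;
# real MedNLI data requires credentialed access.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "emilyalsentzer/Bio_ClinicalBERT"  # Hub mirror of the released weights
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=3)

pairs = [("Patient denies chest pain.", "The patient has no chest pain.", 0),
         ("Patient denies chest pain.", "The patient is febrile.", 1)]
enc = tokenizer([p for p, h, y in pairs], [h for p, h, y in pairs],
                padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([y for p, h, y in pairs])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few illustrative steps, not a full training run
    out = model(**enc, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print(out.logits.argmax(dim=-1))
```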

Results and Discussions

Quantitatively, the domain-specific models improved performance on the clinical NLP tasks other than de-identification. For instance, Bio+Clinical BERT reached a new state-of-the-art accuracy of 82.7% on MedNLI, markedly outperforming previous models, while Clinical BERT and Bio+Discharge Summary BERT posted the strongest exact F1 scores on the i2b2 2010 and 2012 tasks.

Interestingly, the clinical-specific models did not outperform general BERT and BioBERT on the i2b2 2006 and 2014 de-identification tasks. The authors attribute this to a mismatch between the training and task data: MIMIC notes replace protected health information (PHI) with sentinel placeholder tokens, whereas the de-identification datasets contain synthetic but realistic-looking PHI such as plausible names and dates. This syntactic and contextual incongruity disrupts contextual embedding models like BERT, which rely on sentence structures resembling those of their pre-training corpora.

Qualitative analyses provided further evidence of the domain-specific utility of Clinical BERT. Nearest neighbor evaluations for sentinel words demonstrated greater cohesion within the clinical context, highlighting the model's enhanced clinical relevance.
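A nearest-neighbour probe of this kind can be approximated as follows. This is a sketch under the assumption that terms are embedded by mean-pooling the model's last hidden states, which is one reasonable choice rather than the paper's exact procedure; the model identifier and example terms are assumptions for illustration.

```python
# Sketch: embed a handful of clinical terms and rank nearest neighbours by
# cosine similarity. Mean-pooling over the last hidden state is an assumed
# embedding choice, not necessarily the paper's exact procedure.
import torch
from transformers import AutoTokenizer, AutoModel

ckpt = "emilyalsentzer/Bio_ClinicalBERT"  # Hub mirror of the released weights
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt).eval()

terms = ["myocardial infarction", "heart attack", "pneumonia",
         "discharge summary", "metformin"]
with torch.no_grad():
    enc = tokenizer(terms, padding=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state        # (n_terms, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)
    emb = (hidden * mask).sum(1) / mask.sum(1)     # mean over real tokens only

sims = torch.nn.functional.cosine_similarity(emb[0:1], emb[1:])
for term, sim in zip(terms[1:], sims.tolist()):
    print(f"{terms[0]!r} vs {term!r}: {sim:.3f}")
```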

Implications and Future Work

The empirical results underline the efficacy of specialized embeddings for clinical NLP applications, offering improvements over general-domain and biomedical models on several tasks. The work also contributes publicly available, pre-trained clinical BERT models, sparing labs and institutions the considerable computational cost of pre-training such models from scratch.

Several limitations are acknowledged. First, the authors did not experiment with more sophisticated architectures beyond BERT, which may have constrained task performance. Second, the dataset was restricted to notes from a single institution (BIDMC), which might limit the generalizability of the model. Moreover, the failure to enhance de-identification task performance suggests the need for further research into task-aligned data preparation. Future directions could involve using synthetic de-identification during model training to improve compatibility with de-ID task datasets.

Conclusion

This work presents a valuable resource for the NLP community, particularly for researchers working with clinical text. By providing pre-trained, domain-specific BERT models, this paper facilitates the application of advanced NLP techniques to clinical narratives, potentially accelerating developments in clinical informatics and improving patient care outcomes. The models' success on multiple tasks underscores the importance of domain-specific adaptations in NLP model training.

Authors (7)
  1. Emily Alsentzer (13 papers)
  2. John R. Murphy (1 paper)
  3. Willie Boag (9 papers)
  4. Wei-Hung Weng (35 papers)
  5. Di Jin (104 papers)
  6. Tristan Naumann (41 papers)
  7. Matthew B. A. McDermott (22 papers)
Citations (1,777)