
Does BERT Pretrained on Clinical Notes Reveal Sensitive Data? (2104.07762v2)

Published 15 Apr 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Large Transformers pretrained over clinical notes from Electronic Health Records (EHR) have afforded substantial gains in performance on predictive clinical tasks. The cost of training such models (and the necessity of data access to do so) coupled with their utility motivates parameter sharing, i.e., the release of pretrained models such as ClinicalBERT. While most efforts have used deidentified EHR, many researchers have access to large sets of sensitive, non-deidentified EHR with which they might train a BERT model (or similar). Would it be safe to release the weights of such a model if they did? In this work, we design a battery of approaches intended to recover Personal Health Information (PHI) from a trained BERT. Specifically, we attempt to recover patient names and conditions with which they are associated. We find that simple probing methods are not able to meaningfully extract sensitive information from BERT trained over the MIMIC-III corpus of EHR. However, more sophisticated "attacks" may succeed in doing so. To facilitate such research, we make our experimental setup and baseline probing models available at https://github.com/elehman16/exposing_patient_data_release

An Examination of Privacy Concerns in Pretrained Clinical Models: Does BERT Leak Sensitive Data?

The paper "Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?" addresses a critical issue in the deployment of LLMs in biomedical contexts: the potential leakage of sensitive information inadvertently stored in model parameters. As large BERT models are increasingly used for tasks involving Electronic Health Records (EHR), understanding these privacy risks is paramount for both researchers and practitioners in the field.

Context and Motivation

Pretraining LLMs such as BERT on domain-specific data, including clinical notes from EHRs, can significantly enhance performance on predictive clinical tasks. However, releasing such pretrained models poses a potential privacy risk, specifically that Personal Health Information (PHI) may be embedded within the trained model parameters. This risk is particularly acute under regulations like HIPAA, which restricts the sharing of individually identifiable health data. The paper probes the extent to which sensitive information, especially patient identifiers such as names and the medical conditions associated with them, can be extracted from a model like BERT trained on clinical datasets.

Methodological Approach

The research employs a comprehensive experimental framework to assess whether BERT pretrained on clinical notes can reveal sensitive data. The authors use the MIMIC-III dataset, a deidentified collection of clinical notes, reinserting synthetic patient names into the notes to simulate training on non-deidentified data. The investigation is structured around several extraction techniques:

  1. Probing and Template-Based Methods: Fixed templates with a masked slot (e.g., a patient's name) are completed by BERT to test whether its predictions reveal associations between patients and their conditions (a minimal sketch appears after this list).
  2. Model Probing: A classifier (an MLP probe) is trained on the contextual embeddings generated by BERT to predict whether a given patient name is associated with a given condition (second sketch below).
  3. Cosine Similarity Analysis: This approach assesses whether embeddings of patient names lie closer to their associated conditions than to random conditions within the model's learned vector space (third sketch below).
  4. Text Generation: Inspired by extraction techniques developed for GPT-2, text is sampled from BERT to surface potentially memorized fragments of the training data, comparing likelihood scores under different models (fourth sketch below).
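
To make the first method concrete, the sketch below scores candidate first names for a masked slot in a fixed template using a masked language model. It is a minimal illustration, not the authors' exact templates or pipeline: the checkpoint name, the template, and the candidate names are placeholders, and only names that map to a single wordpiece are scored.

```python
# Minimal sketch of template-based probing with a masked LM.
# The checkpoint, template, and candidate names are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "emilyalsentzer/Bio_ClinicalBERT"  # assumed stand-in for a clinical BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def score_candidates(template: str, candidates: list) -> dict:
    """Score single-wordpiece candidates for the [MASK] slot in a template."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(dim=-1)
    scores = {}
    for name in candidates:
        ids = tokenizer(name, add_special_tokens=False).input_ids
        if len(ids) == 1:  # only single-wordpiece names are directly comparable
            scores[name] = probs[ids[0]].item()
    return scores

# Does the model prefer the true patient name given a condition mention?
template = f"{tokenizer.mask_token} was admitted with pneumonia ."
print(score_candidates(template, ["John", "Mary", "David"]))
```

If the true name consistently received higher probability than random names for a patient's documented conditions, such templates would leak information; the paper finds that probes of this kind perform at roughly chance level.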
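
The second method amounts to a lightweight classifier over frozen BERT representations. Everything below is illustrative: the checkpoint, the toy (name, condition, label) pairs, and the probe architecture stand in for the paper's actual data and probe.

```python
# Minimal sketch of an MLP probe over frozen BERT embeddings.
# The checkpoint and the (name, condition, label) pairs are toy placeholders.
import torch
from sklearn.neural_network import MLPClassifier
from transformers import AutoTokenizer, AutoModel

MODEL = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL).eval()

def embed(text: str) -> list:
    """Mean-pooled last-layer representation of a short string."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]
    return hidden.mean(dim=0).tolist()

# Label 1 if the condition appears in that patient's notes, 0 otherwise.
pairs = [("John Smith", "pneumonia", 1), ("John Smith", "gout", 0),
         ("Mary Jones", "sepsis", 1), ("Mary Jones", "asthma", 0)]
X = [embed(f"{name} {cond}") for name, cond, _ in pairs]
y = [label for _, _, label in pairs]

probe = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, y)
print(probe.predict([embed("John Smith pneumonia")]))
```

A probe that beat a frequency baseline on held-out patients would indicate that the representations carry patient-condition associations; in the paper's experiments the probes do not meaningfully exceed chance.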
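
The cosine-similarity analysis reduces to comparing a name embedding against condition embeddings. Again, the checkpoint and the example name and conditions are assumptions made for illustration.

```python
# Minimal sketch of the cosine-similarity check: is a patient's name
# embedded closer to a documented condition than to a random one?
# The checkpoint and the example strings are illustrative placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL).eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pooled last-layer representation of a short string."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]
    return hidden.mean(dim=0)

name = embed("John Smith")
true_cond = embed("pneumonia")  # condition documented for this patient
rand_cond = embed("gout")       # condition drawn at random

print("documented:", F.cosine_similarity(name, true_cond, dim=0).item())
print("random    :", F.cosine_similarity(name, rand_cond, dim=0).item())
```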
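
Finally, sampling text from a masked LM is less standard than sampling from an autoregressive model; one common approximation, shown below, iteratively fills a sequence of mask tokens in random order. This is only a rough stand-in for the paper's generation-based attack, which additionally compares likelihoods across models to flag memorized sequences; the checkpoint, sequence length, and temperature are placeholders.

```python
# Rough sketch of sampling from a masked LM by iteratively filling
# [MASK] positions in random order. A stand-in for the generation-based
# attack; the checkpoint, length, and temperature are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def sample_sequence(length: int = 12, temperature: float = 1.0) -> str:
    """Start from all-[MASK] tokens and fill positions one at a time."""
    ids = torch.full((1, length), tokenizer.mask_token_id, dtype=torch.long)
    ids[0, 0] = tokenizer.cls_token_id
    ids[0, -1] = tokenizer.sep_token_id
    for pos in (torch.randperm(length - 2) + 1).tolist():  # skip [CLS]/[SEP]
        with torch.no_grad():
            logits = model(input_ids=ids).logits[0, pos] / temperature
        ids[0, pos] = torch.multinomial(logits.softmax(dim=-1), 1).item()
    return tokenizer.decode(ids[0, 1:-1])

# Inspect samples for strings that look like memorized note fragments.
print(sample_sequence())
```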

Results and Observations

The empirical findings suggest that simple probing and masking methods do not recover sensitive PHI at rates beyond chance. Even more advanced attacks, such as generation-based extraction, yield mixed results against a masked language model like BERT. Even when patient names are artificially over-represented in the training data, the resulting models do not appear to encode sensitive information in a form readily accessible to straightforward attacks.

Additionally, the paper shows that conventional probing tends to recover general corpus statistics (such as condition frequencies) rather than patient-specific associations. The authors nevertheless urge caution in sharing the parameters of models pretrained on sensitive, non-deidentified data.

Implications and Future Work

This research contributes to the ongoing dialogue on model privacy, particularly in sensitive domains like healthcare. It provides cautious optimism that standard training regimes and naive extraction strategies may not pose a significant privacy threat. However, the authors acknowledge the existence of more sophisticated extraction approaches that could potentially penetrate model defenses.

Future work in this domain could extend to other architectural variants, such as auto-regressive models or larger Transformer frameworks like GPT-3, which may exhibit different information-retention characteristics. Moreover, the ethical considerations raised by the paper underscore the need for continued vigilance in data governance and model-release policies to protect patient privacy in clinical NLP applications. The authors release their experimental setup and baseline probing models to foster further investigation, which could lead to improved privacy-preserving strategies for deploying LLMs in this setting.

In conclusion, this paper is a critical step in understanding the privacy implications of LLMs in healthcare, promoting a cautious approach to model sharing and emphasizing the need for robust privacy-preserving mechanisms moving forward.

Authors (5)
  1. Eric Lehman (9 papers)
  2. Sarthak Jain (33 papers)
  3. Karl Pichotta (5 papers)
  4. Yoav Goldberg (142 papers)
  5. Byron C. Wallace (82 papers)
Citations (112)