
Training Data Leakage Analysis in Language Models (2101.05405v2)

Published 14 Jan 2021 in cs.CR, cs.CL, and cs.LG

Abstract: Recent advances in neural-network-based language models have led to successful deployments of such models, improving user experience in various applications. Strong language-model performance, however, comes with the ability to memorize rare training samples, which poses a serious privacy threat when the model is trained on confidential user content. In this work, we introduce a methodology for identifying the user content in the training data that could be leaked under a strong and realistic threat model. We propose two metrics that quantify user-level data leakage by measuring a model's ability to reproduce unique sentence fragments from its training data. Our metrics further enable comparing different models trained on the same data in terms of privacy. We demonstrate our approach through extensive numerical studies on both RNN- and Transformer-based models, and further illustrate how the proposed metrics can be used to investigate the efficacy of mitigations such as differentially private training or API hardening.
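
To make the core idea concrete, below is a minimal sketch of a fragment-completion probe: given a set of sentence fragments that are unique to the training data, it measures how often a model, prompted with a short prefix, reproduces the rest of the fragment verbatim under greedy decoding. This assumes a Hugging Face causal LM; the `fragment_leakage_rate` function, the fixed prefix split, and the exact-match criterion are illustrative assumptions, not the paper's precise metric definitions.

```python
# Sketch of a fragment-leakage probe (illustrative, not the paper's exact metric).
# Assumes a Hugging Face causal LM; prefix_len and the exact-match rule are
# simplifying assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def fragment_leakage_rate(model_name, fragments, prefix_len=5, device="cpu"):
    """Fraction of unique training fragments the model completes verbatim
    from their first `prefix_len` tokens under greedy decoding."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()
    if not fragments:
        return 0.0
    leaked = 0
    for text in fragments:
        ids = tok(text, return_tensors="pt").input_ids.to(device)
        if ids.shape[1] <= prefix_len:
            continue  # too short to split into a prefix and a target suffix
        prefix, suffix = ids[:, :prefix_len], ids[:, prefix_len:]
        with torch.no_grad():
            out = model.generate(
                prefix,
                max_new_tokens=suffix.shape[1],
                do_sample=False,  # greedy decoding: the adversary's strongest query
                pad_token_id=tok.eos_token_id,  # silence pad warning for GPT-style tokenizers
            )
        # torch.equal is False on shape mismatch, so early EOS counts as no leak.
        if torch.equal(out[:, prefix_len:], suffix):
            leaked += 1
    return leaked / len(fragments)
```

Greedy decoding is used here because it corresponds to a strong threat model: the adversary always queries the model's most likely continuation. The same probe run on two models trained on the same data gives a simple privacy comparison in the spirit of the paper's metrics.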

Authors (7)
  1. Huseyin A. Inan (23 papers)
  2. Osman Ramadan (5 papers)
  3. Lukas Wutschitz (13 papers)
  4. Daniel Jones (7 papers)
  5. Victor Rühle (18 papers)
  6. James Withers (3 papers)
  7. Robert Sim (25 papers)
Citations (9)