
Planting and Mitigating Memorized Content in Predictive-Text Language Models (2212.08619v1)

Published 16 Dec 2022 in cs.CL and cs.CR

Abstract: Language models are widely deployed to provide automatic text completion services in user products. However, recent research has revealed that language models (especially large ones) bear considerable risk of memorizing private training data, which is then vulnerable to leakage and extraction by adversaries. In this study, we test the efficacy of a range of privacy-preserving techniques to mitigate unintended memorization of sensitive user text, while varying other factors such as model size and adversarial conditions. We test both "heuristic" mitigations (those without formal privacy guarantees) and Differentially Private training, which provides provable levels of privacy at the cost of some model performance. Our experiments show that, with the exception of L2 regularization, heuristic mitigations are largely ineffective in preventing memorization in our test suite, possibly because they make overly strong assumptions about the characteristics that define "sensitive" or "private" text. In contrast, Differential Privacy reliably prevents memorization in our experiments, despite its computational and model-performance costs.
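The abstract contrasts heuristic mitigations with Differentially Private training but does not spell out the training procedure. As a rough illustration only, the sketch below assumes the standard DP-SGD recipe (per-example gradient clipping followed by calibrated Gaussian noise); the model, data, and hyperparameters (`clip_norm`, `noise_multiplier`) are placeholders and are not taken from the paper.

```python
# Minimal DP-SGD sketch (assumed, not the paper's actual implementation):
# clip each example's gradient, sum, add Gaussian noise, then average.
import torch
import torch.nn as nn

clip_norm = 1.0         # per-example gradient norm bound C (illustrative value)
noise_multiplier = 1.0  # sigma; noise std is sigma * C (illustrative value)

model = nn.Linear(128, 10)  # stand-in for a predictive-text language model
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def dp_sgd_step(batch_x, batch_y):
    """One DP-SGD update: clip per-example gradients, sum, add noise, average."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):  # microbatches of size 1
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Rescale this example's gradient so its total L2 norm is at most clip_norm.
        total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for acc, p in zip(summed, model.parameters()):
            acc += p.grad * scale
    # Add noise calibrated to the clipping bound, then average over the batch.
    batch_size = len(batch_x)
    for p, acc in zip(model.parameters(), summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
        p.grad = (acc + noise) / batch_size
    optimizer.step()

# Toy usage on random data, standing in for token-prediction batches.
xb = torch.randn(8, 128)
yb = torch.randint(0, 10, (8,))
dp_sgd_step(xb, yb)
```

By comparison, the L2 regularization baseline mentioned in the abstract would, under the same assumptions, amount to ordinary weight decay (e.g., passing `weight_decay=1e-4` to the optimizer), which carries no formal privacy guarantee.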

Authors (6)
  1. C. M. Downey (6 papers)
  2. Wei Dai (230 papers)
  3. Huseyin A. Inan (23 papers)
  4. Kim Laine (13 papers)
  5. Saurabh Naik (3 papers)
  6. Tomasz Religa (1 paper)
Citations (2)