
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2210.17546v3)

Published 31 Oct 2022 in cs.LG and cs.CL

Abstract: Studying data memorization in neural language models helps us understand the risks (e.g., to privacy or copyright) associated with models regurgitating training data and aids in the development of countermeasures. Many prior works -- and some recently deployed defenses -- focus on "verbatim memorization", defined as a model generation that exactly matches a substring from the training set. We argue that verbatim memorization definitions are too restrictive and fail to capture more subtle forms of memorization. Specifically, we design and implement an efficient defense that perfectly prevents all verbatim memorization. And yet, we demonstrate that this "perfect" filter does not prevent the leakage of training data. Indeed, it is easily circumvented by plausible and minimally modified "style-transfer" prompts -- and in some cases even the non-modified original prompts -- to extract memorized information. We conclude by discussing potential alternative definitions and why defining memorization is a difficult yet crucial open question for neural language models.
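As context for the "perfect" verbatim filter the abstract describes, below is a minimal sketch of one way such a defense can work: constrain decoding so that no emitted n-gram exactly matches an n-gram from the training set. The n-gram length `N`, the `build_ngram_index` helper, and the `sample_token` model interface are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Hypothetical sketch of a verbatim-memorization filter: during decoding,
# reject any sampled token whose trailing n-gram exactly matches an
# n-gram seen in the training corpus. All names here are illustrative.

N = 5  # n-gram length to block; an assumption for this sketch


def build_ngram_index(training_token_sequences, n=N):
    """Collect every length-n token window that appears in the training data."""
    index = set()
    for seq in training_token_sequences:
        for i in range(len(seq) - n + 1):
            index.add(tuple(seq[i:i + n]))
    return index


def filtered_decode(sample_token, prompt_tokens, ngram_index, max_new=50, n=N):
    """Decode up to max_new tokens, never completing a training n-gram.

    `sample_token(context, banned)` is an assumed model interface that
    returns a next token not in `banned`, or None once every candidate
    token has been banned.
    """
    out = list(prompt_tokens)
    for _ in range(max_new):
        banned = set()
        while True:
            tok = sample_token(out, banned)
            if tok is None:
                return out  # no continuation avoids a verbatim match; stop early
            if tuple(out[-(n - 1):] + [tok]) in ngram_index:
                banned.add(tok)  # emitting tok would reproduce a training n-gram
            else:
                out.append(tok)
                break
    return out
```

A filter of this form blocks only exact substring matches. As the abstract argues, that is precisely its weakness: a style-transfer prompt that elicits the same content with slightly different wording passes the filter while still leaking memorized information.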

Authors (8)
  1. Daphne Ippolito (47 papers)
  2. Florian Tramèr (87 papers)
  3. Milad Nasr (48 papers)
  4. Chiyuan Zhang (57 papers)
  5. Matthew Jagielski (51 papers)
  6. Katherine Lee (34 papers)
  7. Christopher A. Choquette-Choo (49 papers)
  8. Nicholas Carlini (101 papers)
Citations (42)
