
Counterfactual Memorization in Neural Language Models (2112.12938v2)

Published 24 Dec 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Modern neural LLMs that are widely used in various NLP tasks risk memorizing sensitive information from their training data. Understanding this memorization is important in real-world applications and also from a learning-theoretical perspective. An open question in previous studies of LLM memorization is how to filter out "common" memorization. In fact, most memorization criteria strongly correlate with the number of occurrences in the training set, capturing memorized familiar phrases, public knowledge, templated texts, or other repeated data. We formulate a notion of counterfactual memorization which characterizes how a model's predictions change if a particular document is omitted during training. We identify and study counterfactually-memorized training examples in standard text datasets. We estimate the influence of each memorized training example on the validation set and on generated texts, showing how this can provide direct evidence of the source of memorization at test time.
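The counterfactual notion described in the abstract is, informally, the gap between a model's expected performance on an example when that example is in the training set versus when it is held out. A minimal sketch of that estimate is below; it assumes access to many models trained on random training subsets, with `accuracies[m]` holding model `m`'s score on the target example and `subsets[m]` the indices it was trained on (the function name and data layout here are illustrative, not the paper's actual code):

```python
def counterfactual_memorization(accuracies, subsets, example_idx):
    """Estimate counterfactual memorization of one training example.

    accuracies[m] : model m's performance score on the target example
    subsets[m]    : set of training-example indices model m was trained on

    Returns mean score of models that saw the example minus mean score
    of models that did not, or None if either group is empty.
    """
    in_scores = [a for a, s in zip(accuracies, subsets) if example_idx in s]
    out_scores = [a for a, s in zip(accuracies, subsets) if example_idx not in s]
    if not in_scores or not out_scores:
        return None  # estimate undefined without both groups
    return (sum(in_scores) / len(in_scores)
            - sum(out_scores) / len(out_scores))


# Toy illustration: models trained with example 0 score 0.9 on it,
# models trained without it score 0.4, giving a gap of 0.5.
subsets = [{0, 1}, {1, 2}, {0, 2}, {1}]
accuracies = [0.9, 0.4, 0.9, 0.4]
gap = counterfactual_memorization(accuracies, subsets, example_idx=0)
```

A large gap means the model's behavior on the example depends heavily on having seen that specific document, which is what distinguishes this criterion from occurrence-count-based ones.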

Authors (6)
  1. Chiyuan Zhang (57 papers)
  2. Daphne Ippolito (47 papers)
  3. Katherine Lee (34 papers)
  4. Matthew Jagielski (51 papers)
  5. Florian Tramèr (87 papers)
  6. Nicholas Carlini (101 papers)
Citations (117)
