Data-Copying in Generative Models: A Formal Framework (2302.13181v2)

Published 25 Feb 2023 in cs.LG

Abstract: There has been some recent interest in detecting and addressing memorization of training data by deep neural networks. A formal framework for memorization in generative models, called "data-copying," was proposed by Meehan et al. (2020). We build upon their work to show that their framework may fail to detect certain kinds of blatant memorization. Motivated by this and the theory of non-parametric methods, we provide an alternative definition of data-copying that applies more locally. We provide a method to detect data-copying, and prove that it works with high probability when enough data is available. We also provide lower bounds that characterize the sample requirement for reliable detection.
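The abstract does not spell out the detection procedure, but the flavor of a "local" data-copying test can be illustrated with a simple nearest-neighbor heuristic: partition space by nearest training point, and within each cell compare how close generated samples land to that training point versus how close genuinely fresh (held-out) samples land. The sketch below is an assumption-laden illustration in that spirit, not the paper's actual test or its guarantees; the function name `local_copy_flags` and the `ratio` and `min_count` thresholds are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree


def local_copy_flags(train, held_out, generated, ratio=0.5, min_count=5):
    """Flag training points around which generated samples sit suspiciously
    closer than genuinely fresh data does.

    train, held_out, generated: (n, d) arrays of points.
    ratio: flag a training point when the median generated-sample distance is
           below `ratio` times the median held-out distance (illustrative
           threshold, not from the paper).
    min_count: require at least this many samples of each kind per cell.
    """
    tree = cKDTree(train)
    # Assign each sample to the cell of its nearest training point,
    # recording the distance to that point.
    d_gen, cell_gen = tree.query(generated)
    d_ref, cell_ref = tree.query(held_out)

    flags = np.zeros(len(train), dtype=bool)
    for i in range(len(train)):
        g = d_gen[cell_gen == i]  # generated distances in cell i
        r = d_ref[cell_ref == i]  # held-out distances in cell i
        if len(g) >= min_count and len(r) >= min_count:
            # Generated samples hugging the training point much more tightly
            # than fresh data suggests local data-copying.
            flags[i] = np.median(g) < ratio * np.median(r)
    return flags


# Toy check: a "generator" that memorizes training points plus tiny noise
# should trip the flag on many cells.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))
held_out = rng.normal(size=(500, 2))
copied = train[rng.integers(0, 200, 500)] + 1e-3 * rng.normal(size=(500, 2))
print(local_copy_flags(train, held_out, copied).mean())
```

Working per cell, rather than pooling all distances globally, is what lets a test of this kind notice a generator that copies in one region while behaving honestly elsewhere, which is the failure mode of global tests that the abstract points to.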

Authors (3)
  1. Robi Bhattacharjee (16 papers)
  2. Sanjoy Dasgupta (41 papers)
  3. Kamalika Chaudhuri (122 papers)
Citations (4)
