
The Privacy Onion Effect: Memorization is Relative (2206.10469v2)

Published 21 Jun 2022 in cs.LG and cs.CR

Abstract: Machine learning models trained on private datasets have been shown to leak their private data. While recent work has found that the average data point is rarely leaked, the outlier samples are frequently subject to memorization and, consequently, privacy leakage. We demonstrate and analyse an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack. We perform several experiments to study this effect, and understand why it occurs. The existence of this effect has various consequences. For example, it suggests that proposals to defend against memorization without training with rigorous privacy guarantees are unlikely to be effective. Further, it suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
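
The experiment the abstract describes is iterative: attack the model, peel off the most vulnerable "layer" of outliers, retrain, and attack again to see which previously-safe points become exposed. The following is a minimal sketch of that loop, not the authors' code; `train_model`, `membership_inference_scores`, and `dataset.subset` are hypothetical placeholders for a training routine, a membership-inference scoring function (e.g., a likelihood-ratio style attack), and a dataset indexing helper.

```python
import numpy as np

def onion_experiment(dataset, train_model, membership_inference_scores,
                     n_layers=3, layer_frac=0.01):
    """Iteratively remove the most attack-vulnerable points and re-attack.

    Hypothetical sketch of the 'onion' procedure described in the abstract:
    each round retrains from scratch on the remaining data, scores every
    remaining point with a membership-inference attack, and removes the
    top layer_frac fraction of most-vulnerable points.
    """
    remaining = np.arange(len(dataset))
    vulnerable_history = []  # indices removed at each layer

    for layer in range(n_layers):
        model = train_model(dataset.subset(remaining))            # retrain from scratch
        scores = membership_inference_scores(model, dataset.subset(remaining))
        k = max(1, int(layer_frac * len(remaining)))
        top_k = np.argsort(scores)[-k:]                           # most vulnerable this round
        vulnerable_history.append(remaining[top_k])
        # Peel the layer: drop these points, then observe in the next round
        # which previously-safe points become vulnerable.
        remaining = np.delete(remaining, top_k)

    return vulnerable_history
```

Under the onion effect, each new round surfaces a fresh set of high-scoring points drawn from data that looked safe before the previous layer was removed.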

Authors (6)
  1. Nicholas Carlini (101 papers)
  2. Matthew Jagielski (51 papers)
  3. Chiyuan Zhang (57 papers)
  4. Nicolas Papernot (123 papers)
  5. Andreas Terzis (23 papers)
  6. Florian Tramer (19 papers)
Citations (81)
