
Utility Preserving Secure Private Data Release (1901.09858v3)

Published 28 Jan 2019 in cs.DS and cs.CR

Abstract: Differential privacy mechanisms that also make reconstruction of the data impossible come at a cost: a decrease in utility. In this paper, we tackle this problem by designing a private data release mechanism that makes reconstruction of the original data impossible and also preserves utility for a wide range of machine learning algorithms. We do so by combining the Johnson-Lindenstrauss (JL) transform with noise generated from a Laplace distribution. While the JL transform can itself provide privacy guarantees (Blocki et al., 2012) and make reconstruction impossible, we do not rely on its differential privacy properties and only utilize its ability to make reconstruction impossible. We present novel proofs to show that our mechanism is differentially private under single element changes as well as single row changes to any database. In order to show utility, we prove that our mechanism maintains pairwise distances between points in expectation and also show that its variance is proportional to the dimensionality of the subspace we project the data into. Finally, we experimentally show the utility of our mechanism by deploying it on the task of clustering.
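The abstract describes a release mechanism that first applies a Johnson-Lindenstrauss projection and then perturbs the projected data with Laplace noise. The following Python sketch illustrates that general recipe only; the function name, the noise scale, and the choice of a Gaussian projection matrix are illustrative assumptions, not the paper's exact construction or privacy calibration.

```python
# Illustrative sketch (not the paper's exact mechanism): JL projection
# followed by Laplace noise. The noise scale would, in practice, be set
# from the privacy budget and the mechanism's sensitivity analysis.
import numpy as np

def jl_laplace_release(X, k, laplace_scale, seed=None):
    """Project the (n, d) data matrix X to k dimensions and add Laplace noise.

    X             : (n, d) array of private records
    k             : target dimensionality of the JL subspace
    laplace_scale : scale b of the Laplace noise (assumed parameter)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random Gaussian JL matrix; the 1/sqrt(k) scaling preserves
    # pairwise distances in expectation.
    P = rng.normal(size=(d, k)) / np.sqrt(k)
    Y = X @ P
    # Laplace perturbation of the projected data.
    noise = rng.laplace(loc=0.0, scale=laplace_scale, size=Y.shape)
    return Y + noise

# Example: release a 10-dimensional projection of 100 records in 50 dimensions.
X = np.random.default_rng(0).normal(size=(100, 50))
Y_private = jl_laplace_release(X, k=10, laplace_scale=0.5, seed=1)
print(Y_private.shape)  # (100, 10)
```

Because the projection preserves pairwise distances in expectation, distance-based downstream tasks such as clustering (the paper's experimental setting) can still be run on the released matrix.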

Authors (4)
  1. Jasjeet Dhaliwal (4 papers)
  2. Geoffrey So (1 paper)
  3. Aleatha Parker-Wood (1 paper)
  4. Melanie Beck (10 papers)
Citations (2)
