PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning (2106.04590v2)

Published 8 Jun 2021 in cs.LG and cs.CR

Abstract: We propose a new framework for synthesizing data with deep generative models in a differentially private manner. Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion, so that deep generative models can be trained without re-using the original data. Hence, no extra privacy costs or model constraints are incurred, in contrast to popular approaches such as Differentially Private Stochastic Gradient Descent (DP-SGD), which, among other issues, degrades privacy guarantees as the number of training iterations increases. We demonstrate a realization of our framework using the characteristic function and an adversarial re-weighting objective, both of which are of independent interest. Our proposal has theoretical performance guarantees, and empirical evaluations on multiple datasets show that it outperforms other methods at reasonable levels of privacy.

Authors (3)
  1. Seng Pei Liew (29 papers)
  2. Tsubasa Takahashi (20 papers)
  3. Michihiko Ueno (1 paper)
Citations (25)
