Model Conversion via Differentially Private Data-Free Distillation (2304.12528v2)

Published 25 Apr 2023 in cs.CR

Abstract: While many valuable deep models trained on large-scale data have been released to benefit the artificial intelligence community, they may be attacked in deployment, leading to privacy leakage of the training data. In this work, we propose a learning approach termed differentially private data-free distillation (DPDFD) for model conversion, which converts a pretrained model (teacher) into its privacy-preserving counterpart (student) via an intermediate generator, without access to the training data. The learning coordinates three parties in a unified way. First, a large amount of synthetic data is generated with the generator. These data are then fed into the teacher and student to compute differentially private gradients, obtained by normalizing the gradients and adding noise before performing descent. Finally, the student is updated with these differentially private gradients, and the generator is updated by treating the student as a fixed discriminator, in an alternating manner. In addition to a privacy-preserving student, the generator can produce synthetic data in a differentially private way for other downstream tasks. We theoretically prove that our approach guarantees both differential privacy and convergence. Extensive experiments demonstrate that our approach significantly outperforms other differentially private generative approaches.
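The abstract outlines an alternating three-party loop: generate synthetic data, update the student with differentially private distillation gradients, then update the generator against the fixed student. The sketch below illustrates one possible form of that loop in PyTorch. The toy architectures, the KL distillation loss, the adversarial generator objective, and the DP hyperparameters (`clip_norm`, `noise_mult`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DPDFD-style training loop (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, num_classes = 64, 10

# Hypothetical toy networks; the paper's actual architectures are not specified here.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
teacher   = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, num_classes))  # pretrained in practice
student   = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, num_classes))

opt_s = torch.optim.SGD(student.parameters(), lr=0.01)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)

clip_norm, noise_mult, batch = 1.0, 1.0, 32  # assumed DP hyperparameters

for step in range(100):
    # 1) Generate a batch of synthetic data with the generator.
    x = generator(torch.randn(batch, latent_dim))

    # 2) Student update with differentially private gradients:
    #    each per-example gradient is normalized to clip_norm, summed, then Gaussian noise is added.
    with torch.no_grad():
        t_logits = teacher(x)
    summed = [torch.zeros_like(p) for p in student.parameters()]
    for i in range(batch):
        loss_i = F.kl_div(
            F.log_softmax(student(x[i:i + 1].detach()), dim=1),
            F.softmax(t_logits[i:i + 1], dim=1),
            reduction="batchmean",
        )
        grads = torch.autograd.grad(loss_i, list(student.parameters()))
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        for s, g in zip(summed, grads):
            s += g * (clip_norm / (norm + 1e-12))
    for p, s in zip(student.parameters(), summed):
        noise = torch.randn_like(s) * noise_mult * clip_norm
        p.grad = (s + noise) / batch  # overwrite any stale gradients from the generator step
    opt_s.step()

    # 3) Generator update, treating the student as a fixed discriminator.
    #    Here the generator is pushed toward samples where teacher and student disagree,
    #    a common data-free distillation choice; the paper's exact objective may differ.
    x_g = generator(torch.randn(batch, latent_dim))
    with torch.no_grad():
        t_g = teacher(x_g)
    g_loss = -F.kl_div(F.log_softmax(student(x_g), dim=1), F.softmax(t_g, dim=1), reduction="batchmean")
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Note that only the student's update consumes noised, norm-bounded gradients; the generator sees the synthetic data and the student's responses, so, as the abstract states, its outputs inherit the differential privacy guarantee and can be reused for downstream tasks.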

Authors (5)
  1. Bochao Liu (12 papers)
  2. Pengju Wang (19 papers)
  3. Shikun Li (12 papers)
  4. Dan Zeng (54 papers)
  5. Shiming Ge (47 papers)
Citations (3)
