DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing (2103.01496v2)

Published 2 Mar 2021 in cs.LG, cs.CR, and stat.ML

Abstract: Deep learning techniques have achieved remarkable performance on a wide range of tasks. However, when trained on privacy-sensitive datasets, the model parameters may expose private information in the training data. Prior approaches to differentially private training, although offering rigorous privacy guarantees, yield much lower model performance than their non-private counterparts. Moreover, different runs of the same training algorithm produce models with large performance variance. To address these issues, we propose DPlis (Differentially Private Learning wIth Smoothing). The core idea of DPlis is to construct a smooth loss function that favors noise-resilient models lying in large flat regions of the loss landscape. We provide theoretical justification for the utility improvements of DPlis. Extensive experiments also demonstrate that DPlis can effectively boost model quality and training stability under a given privacy budget.
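The smoothing idea described in the abstract can be illustrated with a short sketch: instead of the raw loss L(θ), one optimizes the smoothed loss E_u[L(θ + u)] with Gaussian parameter noise u, estimated by Monte Carlo averaging. The sketch below is a minimal NumPy illustration of that construction, not the paper's implementation; the function name `smoothed_grad` and the hyperparameters (`k` samples, smoothing scale `sigma_s`) are hypothetical placeholders.

```python
import numpy as np

def smoothed_grad(loss_grad, theta, k=4, sigma_s=0.1, rng=None):
    """Monte Carlo estimate of the gradient of the smoothed loss
    E_u[L(theta + u)], u ~ N(0, sigma_s^2 I).

    Hypothetical helper illustrating randomized smoothing of the loss
    landscape; a real DP training loop would additionally clip these
    gradients and add calibrated noise for the privacy guarantee.
    """
    rng = rng or np.random.default_rng(0)
    # Average the gradient over k randomly perturbed copies of theta.
    grads = [loss_grad(theta + rng.normal(0.0, sigma_s, size=theta.shape))
             for _ in range(k)]
    return np.mean(grads, axis=0)

# Toy example: quadratic loss L(theta) = ||theta||^2 / 2, so grad = theta.
grad = lambda t: t
theta = np.array([1.0, -2.0])
g = smoothed_grad(grad, theta)
```

For a quadratic loss the smoothed gradient is an unbiased estimate of the original gradient, so `g` stays close to `theta`; the benefit of smoothing shows up on non-convex landscapes, where the averaged gradient steers optimization toward flat regions that tolerate the noise injected by differentially private training.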

Authors (7)
  1. Wenxiao Wang (63 papers)
  2. Tianhao Wang (98 papers)
  3. Lun Wang (33 papers)
  4. Nanqing Luo (7 papers)
  5. Pan Zhou (220 papers)
  6. Dawn Song (229 papers)
  7. Ruoxi Jia (88 papers)
Citations (16)