DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM (1906.12056v2)

Published 28 Jun 2019 in cs.LG, cs.CR, and stat.ML

Abstract: Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than the non-private ones. To mitigate this degradation, we propose a DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is the Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism. Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but in practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement and the extra computational complexity and memory overhead compared with DP-SGD are negligible. DP-LSSGD is applicable to train a large variety of ML models, including DNNs. The code is available at \url{https://github.com/BaoWangMath/DP-LSSGD}.
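To make the mechanism described in the abstract concrete, below is a minimal sketch of a single DP-LSSGD-style update in Python/NumPy: per-example gradients are clipped and averaged, Gaussian noise is added (the Gaussian mechanism), and the noisy gradient is then Laplacian-smoothed via an FFT solve of (I - sigma*L)^{-1} before the SGD step. Function names such as `laplacian_smooth` and parameters such as `sigma`, `clip`, and `noise_std` are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Solve (I - sigma * L) v = g via FFT, where L is the 1D circulant
    discrete Laplacian. With sigma = 0 this reduces to plain DP-SGD."""
    n = g.size
    kernel = np.zeros(n)
    kernel[0] = -2.0          # first row of the circulant Laplacian
    kernel[1] = 1.0
    kernel[-1] = 1.0
    denom = 1.0 - sigma * np.fft.fft(kernel)   # eigenvalues of I - sigma*L
    return np.real(np.fft.ifft(np.fft.fft(g) / denom))

def dp_lssgd_step(w, per_example_grads, lr=0.1, clip=1.0,
                  noise_std=1.0, sigma=1.0):
    """One illustrative DP-LSSGD-style update on flattened parameters w."""
    # Clip each per-example gradient to norm <= clip (standard DP-SGD).
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Average and add Gaussian-mechanism noise scaled by the clipping bound.
    noisy = (np.mean(clipped, axis=0)
             + (noise_std * clip / len(clipped)) * np.random.randn(w.size))
    # Smooth the noisy gradient, then take the SGD step.
    return w - lr * laplacian_smooth(noisy, sigma)
```

Because the smoothing is applied after the noise is injected, the privacy accounting is unchanged relative to DP-SGD with the same noise level, while the smoothed update is less noisy in practice; the FFT-based solve keeps the extra cost to O(d log d) per step.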

Authors (5)
  1. Bao Wang (70 papers)
  2. Quanquan Gu (198 papers)
  3. March Boedihardjo (15 papers)
  4. Farzin Barekat (9 papers)
  5. Stanley J. Osher (39 papers)
Citations (24)
