
Adaptive Differentially Private Empirical Risk Minimization (2110.07435v2)

Published 14 Oct 2021 in cs.LG, eess.IV, math.OC, and stat.ML

Abstract: We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization. At each iteration, the random noise added to the gradient is optimally adapted to the stepsize; we name this process adaptive differentially private (ADP) learning. Given the same privacy budget, we prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added. Our method is particularly useful for gradient-based algorithms with time-varying learning rates, including variants of AdaGrad (Duchi et al., 2011). We provide extensive numerical experiments to demonstrate the effectiveness of the proposed adaptive differentially private algorithm.
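The abstract describes the mechanism only at a high level: at each step the gradient is clipped, Gaussian noise is added, and the noise magnitude is coupled to the current (possibly time-varying) stepsize. The sketch below is a minimal, illustrative Python loop showing that shape. The function names (`adp_sgd`, `grad_fn`, `lr_fn`) and the square-root-of-stepsize noise rule are assumptions made for illustration; the paper derives the optimal noise-stepsize coupling, which this sketch does not reproduce.

```python
import numpy as np

def adp_sgd(grad_fn, w0, steps, clip_norm, base_sigma, lr_fn, rng=None):
    """Illustrative gradient-perturbation loop with stepsize-adapted noise.

    NOTE: this is a sketch, not the paper's algorithm. In particular, real
    DP-SGD clips *per-example* gradients before averaging; here we clip the
    averaged gradient for brevity. The rule sigma_t = base_sigma * sqrt(eta_t)
    is a hypothetical adaptation, standing in for the paper's optimal one.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.array(w0, dtype=float)
    for t in range(steps):
        g = grad_fn(w)
        # Clip so the gradient's norm (hence sensitivity) is <= clip_norm.
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        eta = lr_fn(t)
        # Adapt the noise scale to the current stepsize (illustrative rule).
        sigma_t = base_sigma * np.sqrt(eta)
        noise = rng.normal(0.0, sigma_t * clip_norm, size=g.shape)
        w = w - eta * (g + noise)
    return w

# Toy usage: ridge-regularized least squares with a decaying stepsize,
# the time-varying-learning-rate setting the abstract highlights.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=200)
grad = lambda w: X.T @ (X @ w - y) / len(y) + 0.01 * w
w_hat = adp_sgd(grad, np.zeros(5), steps=500, clip_norm=1.0,
                base_sigma=0.5, lr_fn=lambda t: 1.0 / np.sqrt(t + 1))
```

The point of the coupling is that with a decaying stepsize, late iterations move the iterate less, so spending the privacy budget uniformly across iterations is wasteful; matching the noise to the stepsize is what the paper's utility analysis exploits.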

Authors (5)
  1. Xiaoxia Wu (30 papers)
  2. Lingxiao Wang (74 papers)
  3. Irina Cristali (6 papers)
  4. Quanquan Gu (198 papers)
  5. Rebecca Willett (80 papers)
Citations (6)
