
Improving Differentially Private Models with Active Learning (1910.01177v1)

Published 2 Oct 2019 in stat.ML and cs.LG

Abstract: Broad adoption of machine learning techniques has increased privacy concerns for models trained on sensitive data such as medical records. Existing techniques for training differentially private (DP) models give rigorous privacy guarantees, but applying these techniques to neural networks can severely degrade model performance. This performance reduction is an obstacle to deploying private models in the real world. In this work, we improve the performance of DP models by fine-tuning them through active learning on public data. We introduce two new techniques - DIVERSEPUBLIC and NEARPRIVATE - for doing this fine-tuning in a privacy-aware way. For the MNIST and SVHN datasets, these techniques improve state-of-the-art accuracy for DP models while retaining privacy guarantees.
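The abstract does not spell out how DIVERSEPUBLIC or NEARPRIVATE select public examples, but the core active-learning step they build on — querying the public points the DP model is least confident about, then fine-tuning on them — can be sketched with standard uncertainty sampling. The function names and the toy probability table below are illustrative, not from the paper:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each row of predicted class probabilities; higher = more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=1)

def select_uncertain_public(model_probs, k):
    """Return indices of the k public-pool examples the model is least sure about,
    most uncertain first. These would then be labeled and used for fine-tuning,
    which costs no extra privacy budget since the pool is public."""
    scores = predictive_entropy(model_probs)
    return np.argsort(scores)[-k:][::-1]

# Toy public pool: predicted probabilities for 4 examples over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> skip
    [0.34, 0.33, 0.33],  # near-uniform -> most uncertain
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
chosen = select_uncertain_public(probs, k=2)
print(chosen)  # -> [1 3]
```

Because only the model's own predictions on public data drive the selection, this step is "privacy-aware" in the sense the abstract uses: the private training set is never queried again.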

Authors (5)
  1. Zhengli Zhao (9 papers)
  2. Nicolas Papernot (123 papers)
  3. Sameer Singh (96 papers)
  4. Neoklis Polyzotis (14 papers)
  5. Augustus Odena (22 papers)
Citations (5)