
Private Deep Learning with Teacher Ensembles (1906.02303v2)

Published 5 Jun 2019 in cs.CR and cs.LG

Abstract: Privacy-preserving deep learning is crucial for deploying deep neural network based solutions, especially when the model works on data that contains sensitive information. Most privacy-preserving methods lead to undesirable performance degradation. Ensemble learning is an effective way to improve model performance. In this work, we propose a new method for teacher ensembles that uses more informative network outputs under differentially private stochastic gradient descent and provides provable privacy guarantees. Our method employs knowledge distillation and hint learning on intermediate representations to facilitate the training of the student model. Additionally, we propose a simple weighted ensemble scheme that works more robustly across different teaching settings. Experimental results on three common image benchmark datasets (i.e., CIFAR10, MNIST, and SVHN) demonstrate that our approach outperforms previous state-of-the-art methods in both performance and privacy budget.
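The training recipe the abstract describes (a weighted teacher ensemble distilled into a student via soft labels, plus a hint loss on intermediate representations) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the loss weights alpha and beta, the temperature T, and the convention that each model returns (logits, hint features) are all illustrative, and the DP-SGD clipping/noising step that supplies the privacy guarantee is omitted.

```python
# Minimal sketch (PyTorch assumed) of weighted-ensemble distillation with
# hint learning. All names and hyperparameters here are illustrative
# assumptions; the paper's DP-SGD privacy mechanism is not shown.

import torch
import torch.nn.functional as F

def weighted_ensemble_logits(teacher_logits, teacher_weights):
    """Combine per-teacher logits with a simple weighted average."""
    # teacher_logits: list of [batch, classes]; teacher_weights: [num_teachers]
    stacked = torch.stack(teacher_logits)           # [T, batch, classes]
    w = teacher_weights.view(-1, 1, 1)
    return (w * stacked).sum(dim=0) / teacher_weights.sum()

def distill_step(student, teachers, x, y, optimizer,
                 teacher_weights, alpha=0.5, beta=0.1, T=4.0):
    """One step: task loss + soft-label distillation + hint loss."""
    optimizer.zero_grad()
    # Assumption: models return (logits, intermediate "hint" features).
    s_logits, s_hint = student(x)
    with torch.no_grad():
        t_outs = [t(x) for t in teachers]
        t_logits = weighted_ensemble_logits(
            [o[0] for o in t_outs], teacher_weights)
        t_hint = torch.stack([o[1] for o in t_outs]).mean(dim=0)

    task_loss = F.cross_entropy(s_logits, y)
    # Standard temperature-scaled distillation on the ensemble's soft labels.
    distill_loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                            F.softmax(t_logits / T, dim=1),
                            reduction="batchmean") * (T * T)
    # Hint learning: match intermediate representations to the teachers'.
    hint_loss = F.mse_loss(s_hint, t_hint)
    loss = task_loss + alpha * distill_loss + beta * hint_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the ensemble weights could be fixed or learned; the abstract's "simple weighted ensemble scheme" suggests a weighting over teacher outputs, but the exact scheme is defined in the paper itself.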

Authors (7)
  1. Lichao Sun (186 papers)
  2. Yingbo Zhou (81 papers)
  3. Ji Wang (210 papers)
  4. Jia Li (380 papers)
  5. Richard Socher (1 paper)
  6. Philip S. Yu (592 papers)
  7. Caiming Xiong (337 papers)
Citations (2)
