Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation (2209.04599v1)

Published 10 Sep 2022 in cs.CR, cs.CV, and cs.LG

Abstract: Federated Learning (FL) is a machine learning paradigm in which local nodes collaboratively train a central model while the training data remains decentralized. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they suffer from communication bottlenecks and, more importantly, risk privacy leakage. In this work, we develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation using unlabeled, cross-domain public data. We propose a quantized and noisy ensemble of local predictions from completely trained local models for stronger privacy guarantees without sacrificing accuracy. Based on extensive experiments on image classification and text classification tasks, we show that our privacy-preserving method outperforms baseline FL algorithms in both accuracy and communication efficiency.
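The abstract describes an aggregation step in which each client's predictions on shared unlabeled public data are quantized, perturbed with noise, and ensembled into distillation targets for the central model. The sketch below illustrates that idea only; the function name, the choice of Laplace noise, and the `num_bins` and `noise_scale` parameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def noisy_quantized_ensemble(local_probs, num_bins=16, noise_scale=0.1, rng=None):
    """Aggregate per-client predictions on shared unlabeled public data.

    local_probs: array of shape (num_clients, num_samples, num_classes)
        holding softmax outputs from each fully trained local model.
    Quantization and additive noise are applied per client before
    averaging, so exact local predictions are never exposed.
    num_bins and noise_scale are hypothetical hyperparameters.
    """
    rng = rng or np.random.default_rng()
    # Quantize each client's probabilities to a coarse grid.
    quantized = np.round(local_probs * (num_bins - 1)) / (num_bins - 1)
    # Perturb with noise for a stronger privacy guarantee.
    noisy = quantized + rng.laplace(scale=noise_scale, size=quantized.shape)
    # Ensemble across clients and renormalize into soft pseudo-labels.
    ensemble = noisy.mean(axis=0)
    ensemble = np.clip(ensemble, 0.0, None)
    ensemble /= ensemble.sum(axis=-1, keepdims=True) + 1e-12
    return ensemble  # used as distillation targets for the central model
```

Under this reading, each client evaluates its fully trained local model once on the shared public set and uploads only these quantized, noisy predictions, which is what makes the scheme one-shot and communication-efficient.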

Authors (7)
  1. Xuan Gong (16 papers)
  2. Abhishek Sharma (112 papers)
  3. Srikrishna Karanam (38 papers)
  4. Ziyan Wu (59 papers)
  5. Terrence Chen (71 papers)
  6. David Doermann (54 papers)
  7. Arun Innanje (2 papers)
Citations (57)
