Federated Learning with Privacy-Preserving Ensemble Attention Distillation (2210.08464v1)

Published 16 Oct 2022 in cs.LG, cs.AI, and cs.CR

Abstract: Federated Learning (FL) is a machine learning paradigm in which many local nodes collaboratively train a central model while keeping the training data decentralized. This is particularly relevant for clinical applications, since patient data usually cannot be transferred out of medical facilities, making FL necessary. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they require numerous rounds of synchronized communication and, more importantly, suffer from a risk of privacy leakage. In this work, we propose a privacy-preserving FL framework that leverages unlabeled public data for one-way offline knowledge distillation. The central model is learned from local knowledge via ensemble attention distillation. Like existing FL approaches, our technique uses decentralized and heterogeneous local data, but, more importantly, it significantly reduces the risk of privacy leakage. Extensive experiments on image classification, segmentation, and reconstruction tasks demonstrate that our method achieves highly competitive performance with more robust privacy preservation.

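The core idea in the abstract is that the server never touches private client data: it only queries frozen, locally trained models on unlabeled public images and distills their ensembled predictions and attention maps into a central model, offline and in one direction. The sketch below illustrates what one such distillation step could look like in PyTorch. The toy SmallCNN backbone, the attention-map definition (channel-wise mean of squared activations, as in standard attention transfer), and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of one-way ensemble attention distillation on unlabeled
# public data. Architecture, attention definition, and hyperparameters are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy backbone exposing an intermediate feature map for attention."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        feat = self.features(x)          # B x 64 x H x W feature map
        return self.head(feat), feat

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention: channel-wise mean of squared activations, L2-normalized."""
    att = feat.pow(2).mean(dim=1)                 # B x H x W
    return F.normalize(att.flatten(1), dim=1)     # B x (H*W)

def distillation_step(student, local_models, public_batch, optimizer,
                      temp: float = 2.0, att_weight: float = 1.0):
    """One offline step: the server only sees frozen local models and
    unlabeled public images, never the private training data."""
    with torch.no_grad():
        probs, atts = [], []
        for m in local_models:                    # frozen client models
            out, feat = m(public_batch)
            probs.append(F.softmax(out / temp, dim=1))
            atts.append(attention_map(feat))
        ens_prob = torch.stack(probs).mean(0)     # ensemble soft labels
        ens_att = torch.stack(atts).mean(0)       # ensemble attention

    s_out, s_feat = student(public_batch)
    kd_loss = F.kl_div(F.log_softmax(s_out / temp, dim=1), ens_prob,
                       reduction="batchmean") * temp ** 2
    att_loss = F.mse_loss(attention_map(s_feat), ens_att)
    loss = kd_loss + att_weight * att_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    clients = [SmallCNN().eval() for _ in range(3)]   # stand-ins for trained local models
    student = SmallCNN()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    public_images = torch.randn(8, 3, 32, 32)          # unlabeled public data
    print(distillation_step(student, clients, public_images, opt))
```

Because knowledge flows only one way, from frozen local models to the central student, and only outputs on public data are exchanged, no gradients or parameters derived from private data leave the clients, which is the privacy argument the abstract makes.
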
Authors (11)
  1. Xuan Gong (16 papers)
  2. Liangchen Song (20 papers)
  3. Rishi Vedula (1 paper)
  4. Abhishek Sharma (112 papers)
  5. Meng Zheng (44 papers)
  6. Benjamin Planche (34 papers)
  7. Arun Innanje (2 papers)
  8. Terrence Chen (71 papers)
  9. Junsong Yuan (92 papers)
  10. David Doermann (54 papers)
  11. Ziyan Wu (59 papers)
Citations (19)
