DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning (2309.13546v2)

Published 24 Sep 2023 in cs.CV and cs.LG

Abstract: Federated Learning (FL) is a privacy-constrained decentralized machine learning paradigm in which clients collaboratively train models without exposing their private data. However, learning a robust global model in data-heterogeneous and model-heterogeneous FL scenarios is challenging. To address this, we resort to data-free knowledge distillation and propose a new FL method (namely DFRD). DFRD equips a conditional generator on the server to approximate the training space of the local models uploaded by clients, and systematically investigates its training in terms of fidelity, transferability and diversity. To overcome the catastrophic forgetting of the global model caused by the distribution shifts of the generator across communication rounds, we maintain an exponential moving average copy of the generator on the server. Additionally, we propose dynamic weighting and label sampling to accurately extract knowledge from local models. Finally, our extensive experiments on various image classification tasks illustrate that DFRD achieves significant performance gains compared to SOTA baselines.
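
The abstract describes the server-side procedure only at a high level. The sketch below illustrates one plausible shape of a server-side distillation round with an EMA generator copy, assuming a PyTorch-style setup; the generator architecture, loss weights, equal-weight teacher ensemble, and label distribution are illustrative assumptions, not the paper's actual implementation (which uses dynamic weighting, label sampling, and explicit fidelity/transferability/diversity objectives).

```python
# Minimal, illustrative sketch of one server-side round of data-free distillation
# with an EMA copy of the conditional generator. All architectures, loss terms,
# and names here are assumptions for exposition, not the authors' code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalGenerator(nn.Module):
    """Maps (noise, class label) to a synthetic flattened input; architecture assumed."""
    def __init__(self, noise_dim=64, num_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))


@torch.no_grad()
def ema_update(ema_gen, gen, decay=0.99):
    # Exponential moving average of generator weights, kept across rounds
    # to mitigate forgetting caused by the generator's distribution shifts.
    for p_ema, p in zip(ema_gen.parameters(), gen.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)


def distill_round(global_model, local_models, gen, ema_gen, label_probs,
                  steps=100, batch=64, noise_dim=64):
    """One communication round of server-side distillation (simplified)."""
    for m in local_models:                      # uploaded local models act as frozen teachers
        m.requires_grad_(False)
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(global_model.parameters(), lr=1e-3)

    for _ in range(steps):
        # Label sampling: draw class labels from a chosen distribution
        # (a fixed distribution here; the paper samples labels more carefully).
        y = torch.multinomial(label_probs, batch, replacement=True)
        z = torch.randn(batch, noise_dim)
        x = gen(z, y)

        # Equal-weight ensemble of local models as the teacher; the paper's
        # dynamic weighting would replace this plain average.
        teacher_logits = torch.stack([m(x) for m in local_models]).mean(0)
        student_logits = global_model(x)

        # Generator step: synthetic data the teacher classifies as y (fidelity)
        # while teacher and student disagree (transferability); diversity terms omitted.
        loss_g = F.cross_entropy(teacher_logits, y) - F.kl_div(
            F.log_softmax(student_logits, dim=1),
            F.softmax(teacher_logits, dim=1), reduction="batchmean")
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

        # Student step: distill on the current batch plus a batch from the
        # EMA generator to counter catastrophic forgetting.
        with torch.no_grad():
            x_ema = ema_gen(z, y)
            t_ema = torch.stack([m(x_ema) for m in local_models]).mean(0)
        x_cur, t_cur = x.detach(), teacher_logits.detach()
        loss_s = sum(
            F.kl_div(F.log_softmax(global_model(xb), dim=1),
                     F.softmax(tb, dim=1), reduction="batchmean")
            for xb, tb in [(x_cur, t_cur), (x_ema, t_ema)])
        opt_s.zero_grad()
        loss_s.backward()
        opt_s.step()

    ema_update(ema_gen, gen)


# Usage with hypothetical models:
#   gen = ConditionalGenerator(); ema_gen = copy.deepcopy(gen)
#   distill_round(global_model, local_models, gen, ema_gen,
#                 label_probs=torch.full((10,), 0.1))
```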

Authors (6)
  1. Kangyang Luo (16 papers)
  2. Shuai Wang (466 papers)
  3. Yexuan Fu (3 papers)
  4. Xiang Li (1003 papers)
  5. Yunshi Lan (30 papers)
  6. Ming Gao (95 papers)
Citations (14)

Summary

We haven't generated a summary for this paper yet.