Federated Mutual Learning (2006.16765v3)

Published 27 Jun 2020 in cs.LG

Abstract: Federated learning (FL) enables collaborative training of deep learning models on decentralized data. However, three types of heterogeneity in the FL setting pose distinctive challenges to the canonical federated learning algorithm (FedAvg). First, due to the non-IIDness of the data, the globally shared model may perform worse than local models trained solely on their private data. Second, the objectives of the central server and the clients may differ: the server seeks a generalized model, whereas each client pursues a personalized one, and clients may run different tasks. Third, clients may need to design customized models for various scenes and tasks. In this work, we present a novel federated learning paradigm, named Federated Mutual Learning (FML), that deals with these three heterogeneities. FML allows clients to train a generalized model collaboratively and a personalized model independently, and to design their own customized models. Thus, the non-IIDness of the data is no longer a bug but a feature through which clients can be better served personally. Experiments show that FML achieves better performance than alternatives in the typical FL setting, and that clients benefit from FML even with different models and tasks.
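The client-side mechanism the abstract describes is two-way knowledge distillation between a shared model (the client's copy of the global model) and a private, possibly customized local model. Below is a minimal PyTorch sketch of one local update under that reading: it assumes a deep-mutual-learning-style loss (cross-entropy plus a KL term toward the other model's predictions), and the function name `fml_local_update` and the weights `alpha`/`beta` are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def fml_local_update(meme_model, local_model, loader, alpha=0.5, beta=0.5,
                     lr=0.01, device="cpu"):
    """One round of client-side mutual learning (illustrative sketch).

    `meme_model` is the client's copy of the globally shared model, returned
    for server-side aggregation; `local_model` is the private, personalized
    model that never leaves the client. `alpha`/`beta` trade off the
    supervised loss against the distillation term (assumed defaults).
    """
    meme_model.train(); local_model.train()
    opt_meme = torch.optim.SGD(meme_model.parameters(), lr=lr)
    opt_local = torch.optim.SGD(local_model.parameters(), lr=lr)

    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits_m = meme_model(x)
        logits_l = local_model(x)

        # Two-way distillation: each model fits the labels and is also
        # nudged toward the other model's (detached) predictions.
        kl_m = F.kl_div(F.log_softmax(logits_m, dim=1),
                        F.softmax(logits_l.detach(), dim=1),
                        reduction="batchmean")
        kl_l = F.kl_div(F.log_softmax(logits_l, dim=1),
                        F.softmax(logits_m.detach(), dim=1),
                        reduction="batchmean")
        loss_m = alpha * F.cross_entropy(logits_m, y) + (1 - alpha) * kl_m
        loss_l = beta * F.cross_entropy(logits_l, y) + (1 - beta) * kl_l

        opt_meme.zero_grad(); opt_local.zero_grad()
        (loss_m + loss_l).backward()  # detached targets keep gradients separate
        opt_meme.step(); opt_local.step()

    # Only the shared model's weights are uploaded; the server would then
    # average them across clients, FedAvg-style.
    return meme_model.state_dict()
```

Because only the shared model is aggregated, the local model is free to have a different architecture or task head, which is how the sketch accommodates the model and task heterogeneity the abstract raises.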

Authors (9)
  1. Tao Shen (87 papers)
  2. Jie Zhang (847 papers)
  3. Xinkang Jia (1 paper)
  4. Fengda Zhang (11 papers)
  5. Gang Huang (86 papers)
  6. Pan Zhou (220 papers)
  7. Kun Kuang (114 papers)
  8. Fei Wu (317 papers)
  9. Chao Wu (137 papers)
Citations (102)
