Adaptive Expert Models for Personalization in Federated Learning (2206.07832v1)

Published 15 Jun 2022 in cs.LG

Abstract: Federated Learning (FL) is a promising framework for distributed learning when data is private and sensitive. However, the state-of-the-art solutions in this framework are not optimal when data is heterogeneous and non-Independent and Identically Distributed (non-IID). We propose a practical and robust approach to personalization in FL that adjusts to heterogeneous and non-IID data by balancing exploration and exploitation of several global models. To achieve our aim of personalization, we use a Mixture of Experts (MoE) that learns to group clients that are similar to each other, while using the global models more efficiently. We show that our approach achieves an accuracy up to 29.78 % and up to 4.38 % better compared to a local model in a pathological non-IID setting, even though we tune our approach in the IID setting.
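The core idea described in the abstract, a gating mechanism that learns how to mix several global models for each client, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example and not the authors' implementation; the class name `PersonalizedMoE`, the linear gate, and the frozen-expert setup are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch, hypothetical names) of per-client
# personalization via a mixture of experts over several global models.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PersonalizedMoE(nn.Module):
    """Mixes predictions of several frozen global models with a locally
    trained gate; one instance would live on each client."""

    def __init__(self, global_models, in_dim):
        super().__init__()
        # Global models act as experts and are kept fixed on the client.
        self.experts = nn.ModuleList(global_models)
        for p in self.experts.parameters():
            p.requires_grad = False
        # A small gate, trained only on local client data, weights the experts.
        self.gate = nn.Linear(in_dim, len(global_models))

    def forward(self, x):
        flat = x.flatten(1)                                      # (B, in_dim)
        weights = F.softmax(self.gate(flat), dim=-1)             # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, C)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (B, C)


if __name__ == "__main__":
    # Toy usage: two hypothetical global models over 20-dim inputs, 5 classes.
    experts = [nn.Linear(20, 5), nn.Linear(20, 5)]
    moe = PersonalizedMoE(experts, in_dim=20)
    print(moe(torch.randn(8, 20)).shape)  # torch.Size([8, 5])
```

Training only the gate on local data is one plausible way to realize the exploration/exploitation balance over several global models that the abstract mentions; the paper's exact architecture, expert assignment, and training schedule may differ.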

Authors (5)
  1. Martin Isaksson
  2. Edvin Listo Zec
  3. Rickard Cöster
  4. Daniel Gillblad
  5. Šarūnas Girdzijauskas
Citations (4)
