Multi-Center Federated Learning: Clients Clustering for Better Personalization (2005.01026v3)

Published 3 May 2020 in cs.LG, cs.DC, and stat.ML

Abstract: Federated learning has received great attention for its capability to train a large-scale model in a decentralized manner without needing to access user data directly. It helps protect the users' private data from centralized collection. Unlike distributed machine learning, federated learning aims to tackle non-IID data from heterogeneous sources in various real-world applications, such as those on smartphones. Existing federated learning approaches usually adopt a single global model to capture the shared knowledge of all users by aggregating their gradients, regardless of the discrepancy between their data distributions. However, due to the diverse nature of user behaviors, assigning users' gradients to different global models (i.e., centers) can better capture the heterogeneity of data distributions across users. Our paper proposes a novel multi-center aggregation mechanism for federated learning, which learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers. We formulate the problem as a joint optimization that can be efficiently solved by a stochastic expectation maximization (EM) algorithm. Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
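The abstract describes learning multiple global models (centers) and jointly matching clients to them via a stochastic EM procedure. Below is a minimal sketch of one plausible E/M alternation, assuming flattened model parameters and an L2 distance for the client-to-center matching; the function name `multicenter_aggregate`, the distance choice, and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multicenter_aggregate(client_models, centers, n_iters=5):
    """One round of multi-center aggregation (E/M alternation), sketched
    from the abstract: assign clients to centers, then update each center.

    client_models: (n_clients, dim) flattened local model parameters
    centers:       (K, dim) current global models (cluster centers)
    Returns updated centers and the client-to-center assignment.
    """
    for _ in range(n_iters):
        # E-step: match each client to the closest center (L2 distance
        # between the client's model and each global model) -- an assumed
        # matching rule standing in for the paper's optimal matching.
        dists = np.linalg.norm(
            client_models[:, None, :] - centers[None, :, :], axis=-1)
        assignment = dists.argmin(axis=1)

        # M-step: update each center as the mean of its assigned clients'
        # models; keep the old center if no client is assigned to it.
        for k in range(centers.shape[0]):
            members = client_models[assignment == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return centers, assignment

# Hypothetical usage: 10 clients drawn from two distinct data regimes,
# aggregated into 2 centers over 4-dimensional toy models.
rng = np.random.default_rng(0)
clients = np.concatenate([rng.normal(0, 1, (5, 4)), rng.normal(5, 1, (5, 4))])
centers = rng.normal(0, 1, (2, 4))
centers, assignment = multicenter_aggregate(clients, centers)
print(assignment)
```

In a full federated round, each center would then be broadcast back only to its assigned clients for local training, so clients with similar data distributions share a global model instead of being averaged into a single one.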

Authors (7)
  1. Guodong Long (115 papers)
  2. Ming Xie (41 papers)
  3. Tao Shen (87 papers)
  4. Tianyi Zhou (172 papers)
  5. Xianzhi Wang (49 papers)
  6. Jing Jiang (192 papers)
  7. Chengqi Zhang (74 papers)
