
Towards Fairer and More Efficient Federated Learning via Multidimensional Personalized Edge Models (2302.04464v2)

Published 9 Feb 2023 in cs.LG, cs.AI, and cs.DC

Abstract: Federated learning (FL) is an emerging technique that trains models on massive, geographically distributed edge data while maintaining privacy. However, FL faces inherent challenges in fairness and computational efficiency due to the rising heterogeneity of edges, and thus usually yields sub-optimal performance in recent state-of-the-art (SOTA) solutions. In this paper, we propose a Customized Federated Learning (CFL) system to eliminate FL heterogeneity from multiple dimensions. Specifically, CFL tailors personalized models from the specially designed global model for each client, jointly guided by an online trained model-search helper and a novel aggregation algorithm. Extensive experiments demonstrate that CFL has full-stack advantages for both FL training and edge inference and significantly improves on SOTA performance w.r.t. model accuracy (up to 7.2% in the non-heterogeneous environment and up to 21.8% in the heterogeneous environment), efficiency, and FL fairness.
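For background on the aggregation step the abstract refers to: a minimal sketch of standard size-weighted federated averaging (FedAvg-style), which is the usual baseline such aggregation algorithms build on. This is illustrative only and is not the paper's CFL aggregation; `fedavg_aggregate` and the toy parameter arrays are assumed names for the example.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    client_params: list of np.ndarray, one parameter vector per client.
    client_sizes: list of int, number of local samples per client.
    """
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Two toy clients: the second holds 3x the data, so its parameters
# contribute 3x the weight to the aggregated model.
agg = fedavg_aggregate(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0])],
    [1, 3],
)
# Weights are 0.25 and 0.75, giving [2.5, 3.5].
print(agg)
```

Personalized-FL schemes such as the one proposed here replace or augment this uniform-architecture averaging so that each client receives a model tailored to its own capability and data distribution.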

Authors (6)
  1. Yingchun Wang (24 papers)
  2. Jingcai Guo (48 papers)
  3. Jie Zhang (847 papers)
  4. Song Guo (138 papers)
  5. Weizhan Zhang (17 papers)
  6. Qinghua Zheng (56 papers)
Citations (10)
