FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization (2111.14655v2)

Published 29 Nov 2021 in cs.LG, cs.AI, and cs.DC

Abstract: One underlying assumption of recent federated learning (FL) paradigms is that all local models share the same network architecture and size, which becomes impractical for devices with different hardware resources. A scalable federated learning framework should address the heterogeneity arising from clients with different computing capacities and communication capabilities. To this end, this paper proposes FedHM, a novel heterogeneous federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a full-rank model. Our solution enables the training of heterogeneous models with varying computational complexities and aggregates them into a single global model. Furthermore, FedHM significantly reduces communication cost by using low-rank models. Extensive experimental results demonstrate that FedHM outperforms state-of-the-art heterogeneous FL methods in both performance and robustness across models of different sizes under various FL settings. Additionally, this work provides the first theoretical analysis of the convergence guarantee of FL for heterogeneous devices.
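The abstract's core idea, compressing a full-rank layer into low-rank factors for capacity-constrained clients and recovering a full-rank matrix for server-side aggregation, can be illustrated with a minimal NumPy sketch. This is not FedHM's actual implementation or API; the function names (factorize, reconstruct, aggregate), the truncated-SVD factorization, and the example ranks are illustrative assumptions.

```python
# Minimal sketch of the low-rank factorize/aggregate idea from the abstract.
# Assumes a single dense layer; names and ranks are hypothetical, not FedHM's code.
import numpy as np

def factorize(W, rank):
    """Compress a full-rank weight matrix into two low-rank factors via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_S = np.sqrt(S[:rank])
    A = U[:, :rank] * sqrt_S          # shape: (out_dim, rank)
    B = sqrt_S[:, None] * Vt[:rank]   # shape: (rank, in_dim)
    return A, B

def reconstruct(A, B):
    """Recover a full-rank weight matrix from a client's low-rank factors."""
    return A @ B

def aggregate(client_factors):
    """Average the reconstructed full-rank matrices across heterogeneous clients."""
    return np.mean([reconstruct(A, B) for A, B in client_factors], axis=0)

# Server holds a full-rank layer; each client gets factors at a rank matching
# its capacity, trains locally (omitted here), and the server re-aggregates.
W_global = np.random.randn(256, 512)
clients = [factorize(W_global, r) for r in (16, 32, 64)]  # heterogeneous ranks
W_global = aggregate(clients)
```

Communication savings follow directly: a client at rank r exchanges r * (out_dim + in_dim) parameters instead of out_dim * in_dim for the full matrix.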

Authors (7)
  1. Dezhong Yao (36 papers)
  2. Wanning Pan (2 papers)
  3. Michael J O'Neill (2 papers)
  4. Yutong Dai (21 papers)
  5. Yao Wan (70 papers)
  6. Hai Jin (83 papers)
  7. Lichao Sun (186 papers)
Citations (41)