
Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning (2107.11415v1)

Published 23 Jul 2021 in cs.LG, cs.DC, cs.IT, eess.SP, and math.IT

Abstract: Federated Learning (FL) is a recently emerged decentralized ML framework that combines on-device local training with server-based model synchronization to train a centralized ML model over distributed nodes. In this paper, we propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems. For the proposed model, we investigate several device scheduling and update aggregation policies and compare their performance when the devices have heterogeneous computation capabilities and training data distributions. The simulation results show that the scheduling and aggregation design for asynchronous FL can differ substantially from the synchronous case. For example, a norm-based significance-aware scheduling policy might not be efficient in an asynchronous FL setting, and an appropriate "age-aware" weighting design for the model aggregation can greatly improve the learning performance of such systems.
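To make the "age-aware" weighting idea concrete, here is a minimal sketch of one plausible aggregation rule: stale client updates are down-weighted by an exponential decay in their age before being combined into the global model. The function name `age_aware_aggregate`, the `decay` parameter, and the exponential weighting scheme are illustrative assumptions for this sketch, not the paper's exact design.

```python
import numpy as np

def age_aware_aggregate(global_model, updates, decay=0.5):
    """Combine asynchronously received client updates with age-aware weights.

    `updates` is a list of (delta, staleness) pairs, where `delta` is a
    client's model update (np.ndarray) and `staleness` counts how many
    aggregation rounds have elapsed since that client pulled the global
    model. NOTE: the exponential decay below is an assumed weighting, shown
    only to illustrate down-weighting stale updates.
    """
    weights = np.array([decay ** staleness for _, staleness in updates])
    weights = weights / weights.sum()  # normalize to a convex combination
    step = sum(w * delta for w, (delta, _) in zip(weights, updates))
    return global_model + step

# Usage: a fresh update (staleness 0) dominates a 3-rounds-stale one.
global_model = np.zeros(4)
updates = [(np.ones(4), 0), (2.0 * np.ones(4), 3)]
new_model = age_aware_aggregate(global_model, updates)
```

Under this kind of rule, an update that arrives several aggregation periods late still contributes, but with geometrically smaller influence, which matches the paper's observation that weighting by update age can improve learning performance in asynchronous FL.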

Authors (3)
  1. Chung-Hsuan Hu (4 papers)
  2. Zheng Chen (221 papers)
  3. Erik G. Larsson (252 papers)
Citations (25)