Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity (2112.13926v3)

Published 27 Dec 2021 in cs.NI and cs.LG

Abstract: Federated learning (FL) has emerged as a popular technique for distributing machine learning across wireless edge devices. We examine FL under two salient properties of contemporary networks: device-server communication delays and device computation heterogeneity. Our proposed StoFedDelAv algorithm incorporates a local-global model combiner into the FL synchronization step. We theoretically characterize the convergence behavior of StoFedDelAv and obtain the optimal combiner weights, which account for the global model delay and the expected local gradient error at each device. We then formulate a network-aware optimization problem that tunes the minibatch sizes of the devices to jointly minimize energy consumption and machine learning training loss, and solve this non-convex problem through a series of convex approximations. Our simulations reveal that StoFedDelAv outperforms the current state of the art in FL, as evidenced by improvements in the optimization objective.
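To make the abstract's description concrete, here is a minimal Python sketch of a delay-aware local-global combiner in the FL synchronization step. It is based only on the abstract, not the paper itself: the combiner weight `alpha_fn`, the stale-global-model bookkeeping, and all function names are hypothetical stand-ins for the paper's optimal weights (which depend on global model delay and expected local gradient error).

```python
import numpy as np

def local_update(model, grad_fn, minibatch, lr=0.01, steps=5):
    """Run a few SGD steps on a device's minibatch (placeholder)."""
    for _ in range(steps):
        model = model - lr * grad_fn(model, minibatch)
    return model

def combine(local_model, delayed_global_model, alpha):
    """Convex combination applied at the synchronization step.

    `alpha` stands in for the paper's optimal combiner weight, which
    the abstract says depends on the global model delay and the
    device's expected local gradient error.
    """
    return alpha * delayed_global_model + (1.0 - alpha) * local_model

def fed_round(devices, grad_fn, alpha_fn):
    """One FL round: each device blends the (possibly stale) global
    model it last received with its local model, trains locally, and
    the server averages the results."""
    updates = []
    for dev in devices:
        # dev["stale_global"] models the device-server communication delay:
        # the device may only hold an outdated copy of the global model.
        blended = combine(dev["model"], dev["stale_global"], alpha_fn(dev))
        dev["model"] = local_update(blended, grad_fn, dev["batch"])
        updates.append(dev["model"])
    return np.mean(updates, axis=0)  # new global model
```

The minibatch-size tuning described in the abstract would sit on top of this loop, choosing each device's `dev["batch"]` size to trade off energy consumption against training loss.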

Authors (5)
  1. David Nickel (1 paper)
  2. Frank Po-Chen Lin (7 papers)
  3. Seyyedali Hosseinalipour (83 papers)
  4. Christopher G. Brinton (109 papers)
  5. Nicolo Michelusi (35 papers)
Citations (1)
