To Talk or to Work: Delay Efficient Federated Learning over Mobile Edge Devices (2111.00637v1)

Published 1 Nov 2021 in cs.LG and cs.DC

Abstract: Federated learning (FL), an emerging distributed machine learning paradigm, in conflux with edge computing is a promising area with novel applications over mobile edge devices. In FL, mobile devices collaborate to train a model on their own data under the coordination of a central server, sharing only model updates, so the training data remains private. However, without centrally available data, the participating nodes must communicate model updates frequently to reach convergence. The local computation time needed to produce these updates, together with the time spent transmitting them to and from the server, therefore adds delay to the overall training time. Furthermore, unreliable network connections may obstruct efficient communication of these updates. To address these issues, in this paper we propose a delay-efficient FL mechanism that reduces both the overall time (comprising computation and communication latencies) and the number of communication rounds required for the model to converge. Exploring the impact of the various parameters contributing to delay, we seek to balance the trade-off between wireless communication (to talk) and local computation (to work). We formulate the overall time as an optimization problem and demonstrate the efficacy of our approach through extensive simulations.
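
To make the talk-vs-work trade-off described in the abstract concrete, the sketch below is a toy delay model, not the paper's actual formulation: it assumes, purely for illustration, that more local epochs per communication round reduce the number of rounds needed to converge (with diminishing returns), and estimates the resulting overall delay. All function names, parameter values, and the square-root convergence assumption are hypothetical.

```python
# Toy model of the "to talk vs. to work" trade-off (illustrative only;
# not the optimization problem formulated in the paper).
import math


def total_delay(local_epochs, t_epoch, t_comm, base_rounds=100):
    """Estimate overall training time for a given amount of local work.

    local_epochs : local training epochs per communication round
    t_epoch      : seconds of local computation per epoch ("to work")
    t_comm       : seconds to exchange model updates per round ("to talk")
    base_rounds  : assumed rounds to converge when local_epochs == 1
    """
    # Hypothetical convergence model with diminishing returns: extra local
    # work per round reduces the rounds needed, but only sublinearly.
    rounds = max(1, round(base_rounds / math.sqrt(local_epochs)))
    return rounds * (local_epochs * t_epoch + t_comm)


if __name__ == "__main__":
    for epochs in (1, 2, 5, 10, 20):
        delay = total_delay(epochs, t_epoch=0.5, t_comm=4.0)
        print(f"local_epochs={epochs:2d}  estimated delay={delay:7.1f}s")
```

Under these made-up numbers, the estimated delay first drops as local work replaces communication rounds and then rises again once per-round computation dominates; finding that balance point is the kind of trade-off the paper's optimization targets.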

Authors (6)
  1. Pavana Prakash (4 papers)
  2. Jiahao Ding (10 papers)
  3. Maoqiang Wu (2 papers)
  4. Minglei Shu (10 papers)
  5. Rong Yu (141 papers)
  6. Miao Pan (42 papers)
Citations (3)
