Delay Minimization for Federated Learning Over Wireless Communication Networks (2007.03462v1)

Published 5 Jul 2020 in eess.SP and cs.LG

Abstract: In this paper, the problem of delay minimization for federated learning (FL) over wireless communication networks is investigated. In the considered model, each user exploits limited local computational resources to train a local FL model with its collected data and, then, sends the trained FL model parameters to a base station (BS) which aggregates the local FL models and broadcasts the aggregated FL model back to all the users. Since FL involves an exchange of learning models between the users and the BS, both the computation and communication latencies are determined by the required learning accuracy level, which also affects the convergence rate of the FL algorithm. This joint learning and communication problem is formulated as a delay minimization problem, where it is proved that the objective function is a convex function of the learning accuracy. Then, a bisection search algorithm is proposed to obtain the optimal solution. Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
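To make the bisection step concrete, below is a minimal Python sketch. The delay model `total_delay` and its coefficients are hypothetical stand-ins, not the paper's actual formulation: local computation cost is modeled as growing like log(1/η) with the local accuracy target η, and communication cost like 1/(1 − η) with the number of global rounds, echoing the convex structure the abstract describes.

```python
import math

def total_delay(eta, c_comp=1.0, c_comm=0.5):
    """Toy convex delay model (hypothetical coefficients, not from the
    paper): computation time grows as the local accuracy target tightens
    (~log(1/eta) local iterations), while communication time grows with
    the number of global rounds (~1/(1 - eta)). Both terms are convex
    on (0, 1), so their sum is convex."""
    return c_comp * math.log(1.0 / eta) + c_comm / (1.0 - eta)

def bisection_minimize(f, lo=1e-4, hi=1.0 - 1e-4, tol=1e-8, h=1e-7):
    """Bisection on the sign of the derivative, valid for a convex,
    differentiable scalar function f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Central-difference estimate of f'(mid).
        slope = (f(mid + h) - f(mid - h)) / (2.0 * h)
        if slope < 0.0:
            lo = mid  # minimum lies to the right of mid
        else:
            hi = mid  # minimum lies to the left of mid
    return 0.5 * (lo + hi)

eta_star = bisection_minimize(total_delay)
print(f"optimal accuracy eta* ~ {eta_star:.4f}, "
      f"delay = {total_delay(eta_star):.4f}")
```

Because the objective is convex in the accuracy, its derivative changes sign exactly once on the interval, so halving the search interval on that sign converges to the global minimizer; this is the property the paper's convexity proof is exploiting.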

Authors (7)
  1. Zhaohui Yang (193 papers)
  2. Mingzhe Chen (110 papers)
  3. Walid Saad (378 papers)
  4. Choong Seon Hong (165 papers)
  5. Mohammad Shikh-Bahaei (53 papers)
  6. H. Vincent Poor (884 papers)
  7. Shuguang Cui (275 papers)
Citations (52)