
Fast Convergence Algorithm for Analog Federated Learning (2011.06658v1)

Published 30 Oct 2020 in cs.IT, cs.LG, eess.SP, and math.IT

Abstract: In this paper, we consider federated learning (FL) over a noisy fading multiple access channel (MAC), where an edge server aggregates the local models transmitted by multiple end devices through over-the-air computation (AirComp). To realize efficient analog federated learning over wireless channels, we propose an AirComp-based FedSplit algorithm, in which a threshold-based device selection scheme is adopted to achieve reliable local model uploading. In particular, we analyze the performance of the proposed algorithm and prove that it converges linearly to the optimal solution under the assumption that the objective function is strongly convex and smooth. We also characterize the robustness of the proposed algorithm to ill-conditioned problems, thereby achieving fast convergence rates and reducing the number of communication rounds. A finite error bound is further provided to reveal the relationship between the convergence behavior and the channel fading and noise. Our algorithm is theoretically and experimentally shown to be more robust to ill-conditioned problems and to converge faster than benchmark FL algorithms.
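The core mechanism the abstract describes, threshold-based device selection followed by over-the-air aggregation under channel fading and receiver noise, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `aircomp_round`, the channel-inversion pre-scaling, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def aircomp_round(local_models, channel_gains, threshold, noise_std):
    """One illustrative AirComp aggregation round (hypothetical sketch).

    Devices whose channel-gain magnitude falls below `threshold` are
    excluded from transmission (threshold-based device selection).
    Selected devices are assumed to invert their channels, so the server
    receives the noisy superposition of their local models over the MAC.
    """
    # Threshold-based device selection: keep only reliable channels.
    selected = [k for k, h in enumerate(channel_gains) if abs(h) >= threshold]
    if not selected:
        return None, selected

    dim = local_models[0].shape[0]
    # With channel inversion at each selected device, the superposed
    # signal at the server is the sum of the selected local models.
    superposed = sum(local_models[k] for k in selected)
    # Additive receiver noise models the noisy fading MAC.
    noise = rng.normal(scale=noise_std, size=dim)
    # The server averages over the number of participating devices.
    aggregated = (superposed + noise) / len(selected)
    return aggregated, selected
```

With zero noise, the aggregate reduces to the plain average of the selected devices' models; the finite error bound in the paper quantifies how the noise term perturbs this average as a function of the channel statistics.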

Authors (6)
  1. Shuhao Xia (5 papers)
  2. Jingyang Zhu (11 papers)
  3. Yuhan Yang (7 papers)
  4. Yong Zhou (156 papers)
  5. Yuanming Shi (119 papers)
  6. Wei Chen (1290 papers)
Citations (31)
