
User Scheduling for Federated Learning Through Over-the-Air Computation (2108.02891v1)

Published 5 Aug 2021 in cs.LG, cs.AI, and eess.SP

Abstract: Federated learning (FL) is an ML technique that keeps data at the edge devices and exchanges only model parameters during the learning process. FL not only reduces communication needs but also helps protect local privacy. Despite these advantages, FL can still incur large communication latency when massive numbers of edge devices are connected to the central parameter server (PS) and/or millions of model parameters are involved in learning. Over-the-air computation (AirComp) can compute while transmitting, by allowing multiple devices to send their data simultaneously using analog modulation. To achieve good FL performance through AirComp, user scheduling plays a critical role. In this paper, we investigate and compare different user scheduling policies based on criteria such as wireless channel conditions and the significance of model updates. Receiver beamforming is applied to minimize the mean-square error (MSE) of the function aggregation result computed via AirComp. Simulation results show that scheduling based on the significance of model updates yields smaller fluctuations in the training process, while scheduling based on channel conditions is more energy efficient.
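To make the AirComp aggregation idea concrete, below is a minimal NumPy sketch of the standard model the abstract refers to: scheduled devices transmit precoded scalar updates simultaneously over fading channels, the signals superpose in the air, and the PS applies a receive beamformer to recover the sum. The beamformer here is an ad-hoc matched filter, not the MSE-optimal one the paper designs, and all variable names, dimensions, and the fading/noise setup are illustrative assumptions rather than the paper's exact simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 8, 4      # K scheduled edge devices, N receive antennas at the PS
P = 1.0          # per-device transmit power budget (assumed)
sigma2 = 0.01    # receiver noise variance (assumed)

# Hypothetical local model updates, one scalar per device for illustration
s = rng.standard_normal(K)

# Rayleigh-fading uplink channels h_k in C^N, stacked as columns of H
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# Stand-in receive beamformer matched to the average channel; the paper
# instead optimizes this vector to minimize the aggregation MSE.
m = H.mean(axis=1)
m /= np.linalg.norm(m)

# Channel-inversion transmit scaling: device k pre-equalizes its effective
# channel m^H h_k; eta is the largest factor meeting every power constraint.
g = m.conj() @ H                   # effective scalar channels, shape (K,)
eta = P * np.min(np.abs(g)) ** 2   # ensures |b_k|^2 = eta/|g_k|^2 <= P
b = np.sqrt(eta) / g               # per-device analog precoders

# Simultaneous analog transmission: waveforms superpose over the air
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ (b * s) + noise

# The PS reads off the sum of updates directly from the aggregated signal
est = np.real(m.conj() @ y) / np.sqrt(eta)
print(f"true sum {s.sum():+.4f}  AirComp estimate {est:+.4f}  "
      f"squared error {(est - s.sum())**2:.2e}")
```

In this model the residual error comes only from the beamformed noise term m^H n / sqrt(eta), which is why both the beamformer design and the choice of which users to schedule (deep-faded channels shrink eta and amplify noise) drive the aggregation MSE.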

Authors (4)
  1. Xiang Ma (95 papers)
  2. Haijian Sun (42 papers)
  3. Qun Wang (146 papers)
  4. Rose Qingyang Hu (61 papers)
Citations (21)
