
Gradient and Channel Aware Dynamic Scheduling for Over-the-Air Computation in Federated Edge Learning Systems (2212.00491v1)

Published 1 Dec 2022 in eess.SY and cs.SY

Abstract: To support the expected plethora of computation-heavy applications, federated edge learning (FEEL) is a new distributed-learning paradigm offering low latency and privacy preservation. To further improve the efficiency of wireless data aggregation and model learning, over-the-air computation (AirComp) is emerging as a promising solution that exploits the superposition property of wireless channels. However, channel fading and noise cause aggregation distortion in AirComp-enabled federated learning. In addition, the quality of the collected data and the energy consumption of edge devices may also affect the accuracy, efficiency, and convergence of model aggregation. To address these problems, this work proposes a dynamic device scheduling mechanism that selects qualified edge devices, under a proper power control policy, to transmit their local models and participate in model training at the server via AirComp. In this mechanism, data importance is measured jointly by the gradient of the local model parameters, the channel condition, and the energy consumption of the device. In particular, to make full use of the distributed datasets and accelerate the convergence of federated learning, the local updates of unselected devices are retained and accumulated for potential future transmission rather than being discarded. Furthermore, a Lyapunov drift-plus-penalty optimization problem is formulated to search for the optimal device selection strategy. Simulation results validate that the proposed scheduling mechanism achieves higher test accuracy and a faster convergence rate, and is robust against different channel conditions.
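The two scheduling ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the scoring formula, weights (`V`), and function names below are hypothetical simplifications of a drift-plus-penalty style trade-off: devices are ranked by gradient importance and channel quality, penalized by a virtual energy-backlog queue, and unselected devices accumulate their local updates for a later round.

```python
import numpy as np

def schedule_round(grad_norms, channel_gains, energy_costs, energy_queues,
                   V=2.0, num_selected=2):
    """Hypothetical drift-plus-penalty style score: reward data importance
    (local gradient norm) and channel quality, penalize devices whose
    virtual energy queue has grown large. V trades utility vs. drift."""
    scores = V * grad_norms * channel_gains - energy_queues * energy_costs
    order = np.argsort(scores)[::-1]            # highest score first
    return set(order[:num_selected].tolist())

def accumulate_updates(local_grads, accumulated, selected):
    """Unselected devices retain and accumulate their updates for a
    future round instead of discarding them (as the abstract describes)."""
    new_acc = {}
    for dev, g in local_grads.items():
        pending = accumulated.get(dev, 0.0) + g
        if dev in selected:
            new_acc[dev] = 0.0                  # transmitted: reset backlog
        else:
            new_acc[dev] = pending              # retained for later rounds
    return new_acc
```

For example, a device with a large energy-queue backlog is skipped even if its gradient is informative, while its update is carried forward rather than lost.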

Authors (5)
  1. Jun Du (130 papers)
  2. Bingqing Jiang (1 paper)
  3. Chunxiao Jiang (48 papers)
  4. Yuanming Shi (119 papers)
  5. Zhu Han (431 papers)
Citations (70)