Adaptive Federated Learning With Gradient Compression in Uplink NOMA (2003.01344v1)

Published 3 Mar 2020 in cs.NI and eess.SP

Abstract: Federated learning (FL) is an emerging machine learning technique that aggregates model attributes from a large number of distributed devices. Unique features such as energy saving and privacy preservation make FL a highly promising learning approach for power-limited and privacy-sensitive devices. Although distributed computing reduces the amount of information that needs to be uploaded, model updates in FL can still hit a performance bottleneck, especially when they travel over wireless connections. In this work, we investigate the performance of FL updates with mobile edge devices connected to the parameter server (PS) over practical wireless links, where the uplink from user to PS has very limited capacity. Unlike existing works, we apply non-orthogonal multiple access (NOMA) together with gradient compression on the wireless uplink. Simulation results show that our proposed scheme can significantly reduce aggregation latency while achieving similar accuracy.
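The abstract does not detail the compression scheme, but top-k gradient sparsification is a common way to shrink uplink updates in FL. Below is a minimal sketch of that idea: each device transmits only its largest-magnitude gradient entries, and the parameter server reconstructs and averages them. All function names, the compression ratio, and the FedAvg-style averaging here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def top_k_compress(gradient: np.ndarray, ratio: float = 0.05):
    """Keep only the largest-magnitude entries of a flattened gradient.

    Only the (value, index) pairs need to cross the limited uplink;
    the remaining entries are treated as zero. (Illustrative sketch.)
    """
    flat = gradient.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    return flat[idx], idx

def decompress(values, idx, shape):
    """Rebuild a dense gradient from the sparse (value, index) pairs."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

def aggregate(updates, shape):
    """PS side: average the decompressed device updates (FedAvg-style)."""
    return np.mean([decompress(v, i, shape) for v, i in updates], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.normal(size=(256, 128)) for _ in range(4)]  # 4 edge devices
    updates = [top_k_compress(g) for g in grads]
    global_update = aggregate(updates, grads[0].shape)
    print(f"entries sent per device: {updates[0][0].size} of {grads[0].size}")
```

The NOMA aspect, which lets multiple devices share the same uplink resource simultaneously, operates at the physical layer and is not modeled here; the sketch covers only the compression and aggregation logic.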

Authors (3)
  1. Haijian Sun (42 papers)
  2. Xiang Ma (95 papers)
  3. Rose Qingyang Hu (61 papers)
Citations (76)
