
HCFL: A High Compression Approach for Communication-Efficient Federated Learning in Very Large Scale IoT Networks (2204.06760v2)

Published 14 Apr 2022 in cs.LG, cs.AI, and cs.DC

Abstract: Federated learning (FL) is an artificial intelligence paradigm that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending raw data to centralized nodes for processing. Despite its numerous advantages, the low computing resources of IoT devices and the high communication cost of exchanging model parameters severely limit applications of FL in massive IoT networks. In this work, we develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very large scale IoT networks. HCFL reduces the data load of FL processes without changing their structure or hyperparameters. In this way, we not only significantly reduce communication costs but also make intensive learning processes more tractable on low-resource IoT devices. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, and thereby better assess the quality of the FL process. We demonstrate our HCFL scheme through both simulations and mathematical analysis. Our theoretical results serve as a minimum satisfaction guarantee, showing that the FL process achieves good performance whenever a determined configuration is met. We therefore show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
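To make the communication-cost idea concrete, here is a minimal sketch of one FL round in which clients compress their model updates before upload and the server decompresses and averages them. This uses simple top-k sparsification as a stand-in compressor; it is an illustration of the general compress-then-aggregate pattern, not the paper's actual HCFL scheme, and the function names are hypothetical.

```python
import numpy as np

def compress(update, k_ratio=0.25):
    """Keep only the top-k largest-magnitude entries of a model update.
    (Illustrative top-k sparsification; NOT the paper's HCFL scheme.)"""
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    return idx, flat[idx], update.shape

def decompress(idx, values, shape):
    """Rebuild a dense update from its sparse (index, value) representation."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# One simulated FL round: 3 clients send compressed updates, server averages.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=(4, 4)) for _ in range(3)]
recovered = [decompress(*compress(u)) for u in client_updates]
global_update = np.mean(recovered, axis=0)
```

With `k_ratio=0.25`, each client transmits only a quarter of its parameters (plus indices), trading a controlled approximation error for a large reduction in uplink traffic — the general trade-off HCFL is designed to manage at scale.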

Authors (6)
  1. Minh-Duong Nguyen (9 papers)
  2. Sang-Min Lee (3 papers)
  3. Quoc-Viet Pham (66 papers)
  4. Dinh Thai Hoang (125 papers)
  5. Diep N. Nguyen (86 papers)
  6. Won-Joo Hwang (22 papers)
Citations (21)