Communication-Efficient Diffusion Strategy for Performance Improvement of Federated Learning with Non-IID Data (2207.07493v4)

Published 15 Jul 2022 in cs.DC and cs.LG

Abstract: In 6G mobile communication systems, various AI-based network functions and applications have been standardized. Federated learning (FL) is adopted as the core learning architecture for 6G systems to avoid privacy leakage from mobile user data. However, in FL, users with non-independent and identically distributed (non-IID) datasets can degrade the performance of the global model, because the gradient for each local dataset converges in a different direction, inducing a weight-divergence problem. To address this problem, we propose a novel diffusion strategy for ML models (FedDif) that maximizes the performance of the global model with non-IID data. FedDif enables each local model to learn from several different data distributions before parameter aggregation by passing local models between users via device-to-device (D2D) communication. Furthermore, we theoretically demonstrate that FedDif can circumvent the weight-divergence problem. Based on this theory, we propose a communication-efficient diffusion strategy that balances learning performance against communication cost using auction theory. The experimental results show that FedDif improves top-1 test accuracy by up to 34.89% and reduces communication costs by between 14.6% and 63.49%.
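The core mechanism described in the abstract (train a local model, pass it over D2D links to other users so it learns from several non-IID distributions, then aggregate) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (local_train, fedavg, feddif_round) are hypothetical, user_datasets is assumed to be a list of iterables yielding (x, y) batches, and a random reshuffle stands in for the paper's auction-based matching of models to users.

```python
# Minimal sketch of FedDif-style model diffusion before aggregation.
# Assumptions: PyTorch models, user_datasets[i] yields (x, y) batches.
import copy
import random
import torch

def local_train(model, dataset, epochs=1, lr=0.01):
    """One round of local SGD on a user's (possibly non-IID) dataset."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in dataset:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def fedavg(models):
    """Uniform parameter averaging (FedAvg-style aggregation)."""
    avg = copy.deepcopy(models[0])
    state = avg.state_dict()
    for key in state:
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        state[key] = stacked.mean(dim=0).to(state[key].dtype)
    avg.load_state_dict(state)
    return avg

def feddif_round(global_model, user_datasets, diffusion_steps=3):
    """Each local model is trained, then handed to another user (simulating a
    D2D transfer) so it sees several non-IID distributions before aggregation."""
    models = [copy.deepcopy(global_model) for _ in user_datasets]
    holders = list(range(len(user_datasets)))  # which user currently holds which model
    for _ in range(diffusion_steps):
        for i, model in enumerate(models):
            local_train(model, user_datasets[holders[i]])
        random.shuffle(holders)  # placeholder for the paper's auction-based matching
    return fedavg(models)
```

In the paper, the assignment of diffused models to users is decided by an auction that weighs the expected learning gain against the D2D communication cost; the shuffle above is only a stand-in for that scheduling step.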

