
Depersonalized Federated Learning: Tackling Statistical Heterogeneity by Alternating Stochastic Gradient Descent (2210.03444v3)

Published 7 Oct 2022 in cs.LG

Abstract: Federated learning (FL), which has gained increasing attention recently, enables distributed devices to cooperatively train a common ML model for intelligent inference without sharing raw data. However, problems in practical networks, such as non-independent-and-identically-distributed (non-iid) raw data and limited bandwidth, give rise to slow and unstable convergence of the FL training process. To address these issues, we propose a new FL method that can significantly mitigate statistical heterogeneity through a depersonalization mechanism. In particular, we decouple the global and local optimization objectives via alternating stochastic gradient descent, thus reducing the variance accumulated during local update phases and accelerating FL convergence. We then analyze the proposed method in detail and show that it converges at a sublinear rate in the general non-convex setting. Finally, we conduct numerical experiments on public datasets to verify the effectiveness of the proposed method.
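The core idea in the abstract, decoupling global and local objectives by alternating stochastic gradient steps so that local drift does not accumulate into the shared model, can be sketched on synthetic non-iid data. The sketch below is an illustrative assumption of what such an alternating scheme might look like, not the paper's exact algorithm: each client keeps a personalized local model and a copy of the global model, takes alternating gradient steps on each, and the server averages only the global copies. All variable names, step counts, and the least-squares objective are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, rounds, local_steps, lr = 5, 3, 50, 5, 0.1

# Non-iid synthetic data: each client's true weights are shifted differently,
# mimicking statistical heterogeneity across devices.
w_true = rng.normal(size=dim)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(40, dim))
    y = X @ (w_true + 0.5 * rng.normal(size=dim))  # client-specific shift
    clients.append((X, y))

def grad(w, X, y):
    # Gradient of the mean-squared-error objective for one client.
    return 2.0 * X.T @ (X @ w - y) / len(y)

w_global = np.zeros(dim)
w_local = [np.zeros(dim) for _ in range(n_clients)]

for _ in range(rounds):
    updates = []
    for k, (X, y) in enumerate(clients):
        w_g = w_global.copy()
        for _ in range(local_steps):
            # Alternate: one step on the personalized local model,
            # one step on the client's copy of the global model, so the
            # personalized objective absorbs client-specific drift.
            w_local[k] -= lr * grad(w_local[k], X, y)
            w_g -= lr * grad(w_g, X, y)
        updates.append(w_g)
    # Server-side averaging of the depersonalized (global) copies only.
    w_global = np.mean(updates, axis=0)

final_loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
zero_loss = np.mean([np.mean(y ** 2) for X, y in clients])
print("average loss of global model:", final_loss)
```

Under this toy setup the averaged global model fits the shared component of the clients' objectives, while the `w_local` models track their individual shifts; the actual method's update rules and variance-reduction analysis are given in the paper itself.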
