
Federated Stochastic Gradient Descent Begets Self-Induced Momentum (2202.08402v1)

Published 17 Feb 2022 in cs.LG

Abstract: Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems, in which a server and a host of clients collaboratively train a statistical model using the data and computation resources of the clients without directly exposing their privacy-sensitive data. We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process. Based on this finding, we further analyze the convergence rate of a federated learning system by accounting for the effects of parameter staleness and communication resources. These results advance the understanding of the Federated SGD algorithm and forge a link between staleness analysis and federated computing systems, which can be useful for system designers.
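To make the setting concrete, below is a minimal, illustrative sketch of a Federated SGD round in which some clients compute their gradients at a stale (previous-round) global model, which is the mechanism the abstract connects to a momentum-like term in the aggregation. All names, the least-squares objective, the number of clients, and the staleness fraction are assumptions for illustration and are not taken from the paper.

```python
# Illustrative Federated SGD with stale client gradients (not the paper's exact setup).
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 5  # hypothetical number of clients and model dimension
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(K)]

def grad(w, data):
    # Least-squares gradient as a stand-in for each client's local objective.
    X, y = data
    return X.T @ (X @ w - y) / len(y)

w_prev = np.zeros(d)        # global model from the previous round
w_curr = np.zeros(d)        # current global model
lr, stale_frac = 0.05, 0.3  # hypothetical step size and staleness level

for t in range(50):
    grads = []
    for k, data in enumerate(clients):
        # Stale clients evaluate their gradient at the older model; the
        # resulting aggregated step then mixes information from w_curr and
        # w_prev, which is the momentum-like effect the abstract describes.
        w_ref = w_prev if k < stale_frac * K else w_curr
        grads.append(grad(w_ref, data))
    update = lr * np.mean(grads, axis=0)
    w_prev, w_curr = w_curr, w_curr - update
```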

Authors (5)
  1. Howard H. Yang (65 papers)
  2. Zuozhu Liu (78 papers)
  3. Yaru Fu (25 papers)
  4. Tony Q. S. Quek (237 papers)
  5. H. Vincent Poor (884 papers)
Citations (3)