
Local Adaptivity in Federated Learning: Convergence and Consistency (2106.02305v1)

Published 4 Jun 2021 in cs.LG, cs.DC, and stat.ML

Abstract: The federated learning (FL) framework trains a machine learning model using decentralized data stored at edge client devices by periodically aggregating locally trained models. Popular optimization algorithms of FL use vanilla (stochastic) gradient descent for both local updates at clients and global updates at the aggregating server. Recently, adaptive optimization methods such as AdaGrad have been studied for server updates. However, the effect of using adaptive optimization methods for local updates at clients is not yet understood. We show in both theory and practice that while local adaptive methods can accelerate convergence, they can cause a non-vanishing solution bias, where the final converged solution may be different from the stationary point of the global objective function. We propose correction techniques to overcome this inconsistency and complement the local adaptive methods for FL. Extensive experiments on realistic federated training tasks show that the proposed algorithms can achieve faster convergence and higher test accuracy than the baselines without local adaptivity.
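To make the "solution bias" concrete, here is a minimal illustrative sketch (not the paper's actual algorithm or correction technique): a toy federated-averaging loop where each client runs AdaGrad-style local updates on its own quadratic objective. The objective, client setup, and function names are assumptions made for illustration only; the adaptive per-coordinate scaling shifts the fixed point away from the true global optimum, which is the inconsistency the paper analyzes and corrects.

```python
# Illustrative sketch only: FedAvg with local AdaGrad-style updates on a toy problem.
# The quadratic objectives, hyperparameters, and helper names are assumptions,
# not the algorithm or correction techniques proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: client k has loss f_k(x) = 0.5 * ||x - c_k||^2,
# so the global minimizer is the mean of the client optima c_k.
num_clients, dim = 4, 3
client_optima = rng.normal(size=(num_clients, dim))
global_opt = client_optima.mean(axis=0)

def local_adagrad(x, c, steps=10, lr=0.5, eps=1e-8):
    """Run AdaGrad-style local updates on the client objective 0.5 * ||x - c||^2."""
    accum = np.zeros_like(x)                         # accumulated squared gradients
    for _ in range(steps):
        grad = x - c                                 # gradient of the local quadratic
        accum += grad ** 2
        x = x - lr * grad / (np.sqrt(accum) + eps)   # per-coordinate adaptive step
    return x

x_global = np.zeros(dim)
for _ in range(50):
    # Each client starts from the current global model and adapts locally.
    client_models = [local_adagrad(x_global.copy(), c) for c in client_optima]
    # Server update: plain averaging of the client models (FedAvg-style aggregation).
    x_global = np.mean(client_models, axis=0)

# With local adaptivity, the fixed point generally differs from the global optimum:
# this gap is the non-vanishing solution bias discussed in the abstract.
print("distance to global optimum:", np.linalg.norm(x_global - global_opt))
```

With vanilla local SGD on this toy problem the averaging loop would converge to the mean of the client optima; the adaptive per-coordinate step sizes break that property, which is why the paper pairs local adaptivity with correction techniques.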

Authors (6)
  1. Jianyu Wang (84 papers)
  2. Zheng Xu (73 papers)
  3. Zachary Garrett (12 papers)
  4. Zachary Charles (33 papers)
  5. Luyang Liu (20 papers)
  6. Gauri Joshi (73 papers)
Citations (37)
