
FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data (2309.09719v1)

Published 18 Sep 2023 in cs.LG, cs.DC, and math.OC

Abstract: Federated learning is an emerging distributed machine learning method that enables a large number of clients to train a model without exchanging their local data. The time cost of communication is an essential bottleneck in federated learning, especially for training large-scale deep neural networks. Some communication-efficient federated learning methods, such as FedAvg and FedAdam, share the same learning rate across different clients, but they are not efficient when data is heterogeneous. To maximize the performance of optimization methods, the main challenge is how to adjust the learning rate without hurting convergence. In this paper, we propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate based on local historical gradient squares and synchronized learning rates. Theoretical analysis shows that our client-specific, auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients, which enables promising scalability in federated optimization. We also empirically compare our method with several communication-efficient federated optimization methods. Extensive experimental results on computer vision (CV) and NLP tasks show the efficacy of our proposed FedLALR method and coincide with our theoretical findings.
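
The abstract describes each client running an AMSGrad-style update driven by its own historical squared gradients, with learning-rate statistics synchronized at communication rounds. Below is a minimal sketch of that idea, assuming a plain AMSGrad local step and simple averaging of both the model and the adaptive statistics at synchronization; the function names and the averaging rule are illustrative assumptions, not the paper's exact FedLALR procedure.

```python
import numpy as np

def local_amsgrad_step(w, g, m, v, v_hat, lr=1e-3, beta1=0.9, beta2=0.99, eps=1e-8):
    """One client-local AMSGrad-style step (sketch).

    The effective per-coordinate step size lr / (sqrt(v_hat) + eps) is driven
    by this client's own historical squared gradients, so heterogeneous
    clients can end up with different adaptive learning rates.
    """
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate (gradient squares)
    v_hat = np.maximum(v_hat, v)             # AMSGrad max keeps the step size non-increasing
    w = w - lr * m / (np.sqrt(v_hat) + eps)  # per-coordinate adaptive update
    return w, m, v, v_hat

def communication_round(client_states):
    """Periodic synchronization (hypothetical averaging scheme).

    Clients average their model parameters and their adaptive-learning-rate
    statistics, then continue local steps from the shared state.
    """
    avg = {k: np.mean([s[k] for s in client_states], axis=0)
           for k in ("w", "m", "v", "v_hat")}
    return [{k: val.copy() for k, val in avg.items()} for _ in client_states]
```

In a full training loop, each client would run several `local_amsgrad_step` calls on its own data between calls to `communication_round`, which is what makes the method communication-efficient.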

Authors (7)
  1. Hao Sun (383 papers)
  2. Li Shen (363 papers)
  3. Shixiang Chen (18 papers)
  4. Jingwei Sun (31 papers)
  5. Jing Li (621 papers)
  6. Guangzhong Sun (18 papers)
  7. Dacheng Tao (829 papers)
Citations (1)