Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning (2101.11203v3)

Published 27 Jan 2021 in cs.LG and cs.DC

Abstract: Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data. FL has received increasing attention in recent years thanks to its data privacy protection, communication efficiency, and a linear speedup for convergence in training (i.e., convergence performance increases linearly with respect to the number of workers). However, existing studies on linear speedup for convergence are only limited to the assumptions of i.i.d. datasets across workers and/or full worker participation, both of which rarely hold in practice. So far, it remains an open question whether or not the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL. In this paper, we show that the answer is affirmative. Specifically, we show that the federated averaging (FedAvg) algorithm (with two-sided learning rates) on non-i.i.d. datasets in non-convex settings achieves a convergence rate $\mathcal{O}(\frac{1}{\sqrt{mKT}} + \frac{1}{T})$ for full worker participation and a convergence rate $\mathcal{O}(\frac{\sqrt{K}}{\sqrt{nT}} + \frac{1}{T})$ for partial worker participation, where $K$ is the number of local steps, $T$ is the number of total communication rounds, $m$ is the total number of workers, and $n$ is the number of participating workers in one communication round under partial worker participation. Our results also reveal that the local steps in FL can help the convergence and show that the maximum number of local steps can be improved to $T/m$ under full worker participation. We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results.

Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning

This paper addresses a key challenge in federated learning (FL), which is the achievement of linear speedup in convergence despite the presence of non-independent and identically distributed (non-i.i.d.) datasets and partial worker participation—conditions that frequently occur in realistic FL settings. The authors' effort to extend the analysis of the FedAvg algorithm under these practical conditions is a significant contribution to the field.

Summary of Contributions

The authors propose a generalized FedAvg algorithm with two-sided learning rates (a worker-side rate for local SGD and a server-side rate for the global update), which achieves linear speedup with non-i.i.d. datasets under both full and partial worker participation. The theoretical analysis establishes a convergence rate of $\mathcal{O}(\frac{1}{\sqrt{mKT}} + \frac{1}{T})$ for full participation and $\mathcal{O}(\frac{\sqrt{K}}{\sqrt{nT}} + \frac{1}{T})$ for partial participation, where $K$ is the number of local SGD steps, $T$ is the total number of communication rounds, $m$ is the total number of workers, and $n$ is the number of workers participating in each communication round.
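
To make the two-sided update concrete, the following is a minimal NumPy sketch of generalized FedAvg with separate worker-side and server-side learning rates and partial worker participation. The synthetic least-squares problem, the uniform client sampling, and all hyper-parameter values (`eta`, `eta_l`, `K`, `n`, `m`, `T`) are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous least-squares problem: each worker i holds (A_i, b_i)
# generated around a worker-specific optimum, so local objectives disagree
# (a crude stand-in for non-i.i.d. data).
m, d = 20, 10                                       # total workers, model dimension
workers = []
for i in range(m):
    A = rng.normal(size=(50, d))
    x_star_i = rng.normal(size=d) + 2.0 * (i % 2)   # heterogeneous local optima
    workers.append((A, A @ x_star_i))

def local_grad(w, data):
    A, b = data
    return A.T @ (A @ w - b) / len(b)

def fedavg_two_sided(T=200, K=5, n=5, eta=1.0, eta_l=0.01):
    """Generalized FedAvg: sampled workers run K local gradient steps with
    rate eta_l; the server applies the averaged model delta with its own rate eta."""
    w = np.zeros(d)
    for _ in range(T):
        # Partial participation: n of m workers, sampled uniformly without replacement.
        sampled = rng.choice(m, size=n, replace=False)
        deltas = []
        for i in sampled:
            w_i = w.copy()
            for _ in range(K):                      # K local (full-batch) gradient steps
                w_i -= eta_l * local_grad(w_i, workers[i])
            deltas.append(w_i - w)
        # Server step: move along the average local update, scaled by eta.
        w = w + eta * np.mean(deltas, axis=0)
    return w

w_final = fedavg_two_sided()
grad_norm = np.linalg.norm(np.mean([local_grad(w_final, wk) for wk in workers], axis=0))
print("global gradient norm after training:", grad_norm)
```

The design choice the analysis exploits is that the worker-side rate `eta_l` and the server-side rate `eta` can be tuned separately, rather than being tied together as in vanilla FedAvg.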

The theoretical insights are supported by extensive experimental results on the MNIST and CIFAR-10 datasets, which confirm that the proposed algorithm can efficiently mitigate the convergence slowdown caused by statistical heterogeneity (non-i.i.d. data). Additionally, their experiments illustrate how the choice of hyper-parameters, the number of participating workers, and the number of local steps impact performance.
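
As a concrete illustration of what the non-i.i.d. setting means in such experiments, a common way to create statistical heterogeneity is a label-skew partition, in which each worker only sees a few classes. The sketch below is a generic version of this idea; the partitioning scheme, shard sizes, and parameters are assumptions, not necessarily those used in the paper's MNIST/CIFAR-10 experiments.

```python
import numpy as np

def label_skew_partition(labels, num_workers, classes_per_worker, seed=0):
    """Give each worker shards drawn from only `classes_per_worker` classes,
    producing a non-i.i.d. (label-skewed) split of a dataset."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    idx_by_class = {c: rng.permutation(np.flatnonzero(labels == c)) for c in classes}
    cursor = {c: 0 for c in classes}
    shard = {c: max(1, len(idx_by_class[c]) // num_workers) for c in classes}
    partition = []
    for _ in range(num_workers):
        chosen = rng.choice(classes, size=classes_per_worker, replace=False)
        worker_idx = []
        for c in chosen:
            # Hand this worker the next unused shard of class c.
            start = cursor[c]
            worker_idx.extend(idx_by_class[c][start:start + shard[c]])
            cursor[c] += shard[c]
        partition.append(np.array(worker_idx))
    return partition

# Toy usage: 1000 fake labels over 10 classes, 20 workers, 2 classes per worker.
labels = np.random.default_rng(1).integers(0, 10, size=1000)
parts = label_skew_partition(labels, num_workers=20, classes_per_worker=2)
print("samples per worker:", [len(p) for p in parts[:5]], "...")
```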

Numerical Results and Claims

The authors highlight notable findings through their experiments:

  • The proposed FedAvg algorithm achieves comparable convergence under both full and partial worker participation irrespective of data heterogeneity, provided the learning rates are set appropriately.
  • Partial participation introduces an additional variance component, yet it does not fundamentally alter the convergence order, preserving linear speedup characteristics.
  • The maximum number of local steps that still permits convergence at the linear-speedup rate is improved to $T/m$ under full worker participation, significantly reducing communication overhead relative to the previous bound of $T^{1/3}/m$ (see the back-of-the-envelope comparison below this list).
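
For a rough sense of scale, plugging illustrative values (assumed here, not taken from the paper's experiments) into the two bounds on the number of local steps $K$ per round gives:

$$
T = 10^4,\ m = 100:\qquad \frac{T^{1/3}}{m} = \frac{10^{4/3}}{100} \approx 0.22 \qquad\text{versus}\qquad \frac{T}{m} = \frac{10^4}{100} = 100.
$$

Under the earlier bound, essentially no local work would be allowed at this scale, whereas the improved bound permits on the order of a hundred local steps between communications.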

Practical and Theoretical Implications

  1. Practical Implications:
    • The insights from this paper can translate into significantly lower communication costs. By increasing local computation (the number of local SGD steps), the need for frequent communication is reduced, which is crucial for real-world FL applications involving distributed devices with varying availability and communication constraints.
    • The ability to maintain linear speedup with reduced synchronization complexity broadens the applicability of FL to more heterogeneous and dynamic environments, particularly mobile and edge computing scenarios.
  2. Theoretical Implications:
    • This paper lays the groundwork for subsequent studies to further optimize FL algorithms under practical constraints. The explicit decoupling of server-side and worker-side learning rates opens new opportunities for tuning the convergence behavior of FL algorithms.
    • The improved understanding of data heterogeneity's impact paves the way for future research into adaptive learning algorithms that adjust dynamically to varying non-i.i.d. levels and system non-stationarity.
  3. Future Directions:
    • Further explorations could focus on optimizing sampling strategies for worker participation, balancing worker availability against the added sampling variance without exacerbating communication costs (two standard sampling schemes are sketched after this list).
    • Exploring alternative mechanisms or control variates (beyond SCAFFOLD) to further reduce gradient variance and improve convergence speed without increasing communication complexity might be another avenue of research.
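
As a concrete starting point for the sampling question, the snippet below contrasts two schemes commonly analyzed in the FL literature: uniform sampling without replacement, and probability-weighted sampling with replacement combined with an importance-weighting correction that keeps the aggregate unbiased. The weights used here are purely illustrative, and this is not a claim about the specific strategies analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 10                        # total workers, workers sampled per round
q = rng.dirichlet(np.ones(m))         # assumed sampling weights, e.g. data-size proportional

# Scheme A: uniform sampling WITHOUT replacement; aggregate by plain averaging.
uniform_ids = rng.choice(m, size=n, replace=False)

# Scheme B: sampling WITH replacement proportional to q; each sampled update is
# scaled by 1/(m * q_i), so the expected aggregate equals the full-participation
# average (1/m) * sum_i update_i (an importance-sampling correction).
weighted_ids = rng.choice(m, size=n, replace=True, p=q)
correction = 1.0 / (m * q[weighted_ids])

print("uniform sample :", sorted(uniform_ids.tolist()))
print("weighted sample:", sorted(weighted_ids.tolist()))
print("correction factors:", np.round(correction, 2))
```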

This investigation into FL's resilience to non-i.i.d. data distributions and partial worker participation represents a crucial stride in the continued endeavor to make federated learning a robust and practical solution for distributed machine learning applications.

Authors (3)
  1. Haibo Yang (38 papers)
  2. Minghong Fang (34 papers)
  3. Jia Liu (369 papers)
Citations (231)