Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation (2210.01161v1)
Published 3 Oct 2022 in cs.LG, cs.AI, cs.DC, math.OC, and stat.ML
Abstract: Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases. The \textit{FedBuff} algorithm (Nguyen et al., 2022) alleviates this problem by allowing asynchronous updates (staleness), which enhances the scalability of training while preserving privacy via secure aggregation. We revisit the \textit{FedBuff} algorithm for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm. This paper presents a theoretical analysis of the algorithm's convergence rate that accounts for data heterogeneity, batch size, and delay.
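To make the buffered asynchronous scheme concrete, below is a minimal toy sketch in the spirit of FedBuff: clients run local SGD on possibly stale copies of the global model, and the server applies an aggregated update only after a fixed number of client deltas has accumulated in its buffer. All names (e.g. `buffer_size`, `client_update`), the least-squares objective, and the random-completion model of staleness are illustrative assumptions, not the authors' implementation; secure aggregation and the convergence analysis are omitted.

```python
import random
import numpy as np

def client_update(w_stale, X, y, lr=0.05, local_steps=5):
    """Client: a few local SGD steps on a (possibly stale) model copy."""
    w = w_stale.copy()
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w - w_stale                           # report the local delta

def fedbuff_sketch(clients, dim, buffer_size=4, server_steps=50, server_lr=1.0):
    """Server: aggregate asynchronously arriving deltas once the buffer fills."""
    w = np.zeros(dim)
    in_flight = []                               # (stale model copy, client data)
    buffer, filled = np.zeros(dim), 0
    for _ in range(server_steps):
        # Dispatch a client with the current model; it may finish much later.
        in_flight.append((w.copy(), random.choice(clients)))
        # Some in-flight client finishes and reports a delta computed on a stale model.
        w_stale, (X, y) = in_flight.pop(random.randrange(len(in_flight)))
        buffer += client_update(w_stale, X, y)
        filled += 1
        if filled == buffer_size:                # aggregate only when the buffer is full
            w += server_lr * buffer / buffer_size
            buffer, filled = np.zeros(dim), 0
    return w
```

The key design point the sketch tries to capture is that the server never waits for all clients: updates arrive out of order with arbitrary staleness, and aggregation is triggered purely by the buffer count, which is what the paper's analysis must handle without assuming a bounded gradient norm.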