Accelerating Federated Learning with a Global Biased Optimiser (2108.09134v3)

Published 20 Aug 2021 in cs.LG and cs.DC

Abstract: Federated Learning (FL) is a recent development in distributed machine learning that collaboratively trains models without training data leaving client devices, preserving data privacy. In real-world FL, the training set is distributed over clients in a highly non-Independent and Identically Distributed (non-IID) fashion, harming model convergence speed and final performance. To address this challenge, we propose a novel, generalised approach for incorporating adaptive optimisation into FL with the Federated Global Biased Optimiser (FedGBO) algorithm. FedGBO accelerates FL by employing a set of global biased optimiser values during training, reducing 'client-drift' from non-IID data whilst benefiting from adaptive optimisation. We show that in FedGBO, updates to the global model can be reformulated as centralised training using biased gradients and optimiser updates, and apply this framework to prove FedGBO's convergence on nonconvex objectives when using the momentum-SGD (SGDm) optimiser. We also conduct extensive experiments using four FL benchmark datasets (CIFAR100, Sent140, FEMNIST, Shakespeare) and three popular optimisers (SGDm, RMSProp, Adam) to compare FedGBO against six state-of-the-art FL algorithms. The results demonstrate that FedGBO displays superior or competitive performance across the datasets whilst having low data-upload and computational costs, and provide practical insights into the trade-offs associated with different adaptive-FL algorithms and optimisers.
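To make the mechanism described in the abstract concrete, below is a minimal sketch of one communication round of a FedGBO-style scheme with the momentum-SGD (SGDm) optimiser. It assumes each sampled client runs its local steps seeded from the same global momentum buffer (rather than private adaptive statistics), and that the server refreshes that buffer from the round's averaged pseudo-gradient; the `client.grad` oracle, the client-sampling scheme, and the server rule for the buffer are illustrative assumptions, not the paper's exact algorithm.

```python
import random
import numpy as np

def fedgbo_round(x, m, clients, lr=0.01, beta=0.9, local_steps=10, n_sample=5):
    """One communication round of a FedGBO-style scheme with momentum-SGD.

    x : np.ndarray, global model parameters
    m : np.ndarray, global (biased) momentum buffer shared by all clients

    Every sampled client starts its local steps from the same global
    buffer instead of accumulating private adaptive statistics -- the
    mechanism the abstract credits with reducing client-drift on
    non-IID data.
    """
    deltas = []
    for client in random.sample(clients, n_sample):
        xc, mc = x.copy(), m.copy()
        for _ in range(local_steps):
            g = client.grad(xc)        # hypothetical per-client gradient oracle
            mc = beta * mc + g         # SGDm step seeded by the global buffer
            xc -= lr * mc
        deltas.append(x - xc)          # clients upload only a model delta

    avg_delta = np.mean(deltas, axis=0)
    x_new = x - avg_delta              # equals the average of client models
    # Assumed server rule: refresh the global buffer from the round's
    # averaged pseudo-gradient (the paper's exact update may differ).
    m_new = beta * m + avg_delta / (lr * local_steps)
    return x_new, m_new
```

In this sketch clients upload only a model delta and download one extra buffer per round, consistent with the low data-upload and computational costs the abstract claims relative to adaptive-FL methods that communicate per-client optimiser state.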

Authors (6)
  1. Jed Mills (4 papers)
  2. Jia Hu (41 papers)
  3. Geyong Min (35 papers)
  4. Rui Jin (19 papers)
  5. Siwei Zheng (1 paper)
  6. Jin Wang (356 papers)