Towards Efficient Synchronous Federated Training: A Survey on System Optimization Strategies (2109.03999v3)

Published 9 Sep 2021 in cs.DC, cs.LG, and cs.NI

Abstract: The increasing demand for privacy-preserving collaborative learning has given rise to a new computing paradigm called federated learning (FL), in which clients collaboratively train an ML model without revealing their private training data. Given an acceptable level of privacy guarantee, the goal of FL is to minimize the time-to-accuracy of model training. Compared with distributed ML in data centers, there are four distinct challenges to achieving short time-to-accuracy in FL training, namely the lack of information for optimization, the tradeoff between statistical and system utility, client heterogeneity, and the large configuration space. In this paper, we survey recent works addressing these challenges and present them following a typical training workflow through three phases: client selection, configuration, and reporting. We also review systems work, including measurement studies and benchmarking tools, that aims to support FL developers.

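As a rough illustration of the synchronous training workflow the abstract describes, the sketch below simulates one round in its three phases: client selection, configuration (local training on private data), and reporting (server-side aggregation). The toy least-squares model, the FedAvg-style weighted averaging, and all function and variable names are assumptions chosen for illustration; they are not taken from the paper.

```python
import random
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """Phase 2 (configuration): client runs a few epochs of gradient
    descent on a toy least-squares objective over its private data."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def synchronous_round(global_w, clients, fraction=0.3):
    # Phase 1 (client selection): sample a subset of available clients.
    k = max(1, int(fraction * len(clients)))
    selected = random.sample(clients, k)

    # Phase 2 (configuration): broadcast the global model; clients train locally.
    reports = [local_update(global_w, c) for c in selected]

    # Phase 3 (reporting): aggregate returned models, weighted by local
    # dataset size (FedAvg-style averaging, used here as an example).
    total = sum(n for _, n in reports)
    return sum(w * (n / total) for w, n in reports)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Simulate 10 clients, each holding a small private dataset.
    clients = []
    for _ in range(10):
        X = rng.normal(size=(20, 2))
        y = X @ true_w + 0.1 * rng.normal(size=20)
        clients.append((X, y))

    w = np.zeros(2)
    for _ in range(20):
        w = synchronous_round(w, clients)
    print("learned weights:", w)
```

In this simplified setting, the synchronous round only completes once every selected client reports back, which is why the survey's challenges (client heterogeneity, stragglers, and the selection/configuration design space) dominate time-to-accuracy in practice.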
Citations (22)
