Semi-Synchronous Federated Learning for Energy-Efficient Training and Accelerated Convergence in Cross-Silo Settings (2102.02849v2)

Published 4 Feb 2021 in cs.LG

Abstract: There are situations where data relevant to machine learning problems are distributed across multiple locations that cannot share the data for regulatory, competitive, or privacy reasons. Machine learning approaches that require data to be copied to a single location are hampered by these restrictions on data sharing. Federated Learning (FL) is a promising approach to learn a joint model over all the available data across silos. In many cases, the sites participating in a federation have different data distributions and computational capabilities. In these heterogeneous environments, existing approaches exhibit poor performance: synchronous FL protocols are communication-efficient, but have slow learning convergence and high energy cost; conversely, asynchronous FL protocols have faster convergence with lower energy cost, but higher communication cost. In this work, we introduce a novel energy-efficient Semi-Synchronous Federated Learning protocol that periodically mixes local models, achieving minimal idle time and fast convergence. We show through extensive experiments over established benchmark datasets in the computer-vision domain, as well as in real-world biomedical settings, that our approach significantly outperforms previous work in data- and computationally heterogeneous environments.
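The abstract describes the protocol only at a high level. As a rough illustration of the core idea, below is a minimal Python sketch of semi-synchronous training: every silo trains for the same fixed wall-clock budget per synchronization period (so faster silos complete more local steps and no silo idles waiting for stragglers), after which the server mixes the local models by a weighted average. The SiloClient class, the linear-regression objective, and the weighting by local data size are illustrative assumptions, not the paper's actual implementation.

import numpy as np

class SiloClient:
    # Hypothetical silo: trains a linear model on its private shard.
    # secs_per_step models heterogeneous compute speed across silos.
    def __init__(self, X, y, secs_per_step):
        self.X, self.y = X, y
        self.secs_per_step = secs_per_step
        self.w = np.zeros(X.shape[1])

    def train_for(self, budget_secs, lr=0.1):
        # Spend the whole time budget: fast silos run more local SGD steps,
        # slow silos fewer, but nobody sits idle waiting for stragglers.
        steps = max(1, int(budget_secs / self.secs_per_step))
        for _ in range(steps):
            grad = self.X.T @ (self.X @ self.w - self.y) / len(self.y)
            self.w -= lr * grad
        return steps

def semi_sync_train(clients, num_periods, budget_secs):
    # Run num_periods synchronization periods of semi-synchronous FL.
    global_w = np.zeros_like(clients[0].w)
    for _ in range(num_periods):
        for c in clients:            # broadcast, then local training
            c.w = global_w.copy()
            c.train_for(budget_secs)
        sizes = np.array([len(c.y) for c in clients], dtype=float)
        # Synchronization point: mix local models, weighted by data size.
        global_w = sum(s * c.w for s, c in zip(sizes / sizes.sum(), clients))
    return global_w

# Example: three silos with the same data distribution but 1x/2x/8x step costs.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_silo(n, secs_per_step):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return SiloClient(X, y, secs_per_step)

silos = [make_silo(200, 0.01), make_silo(100, 0.02), make_silo(50, 0.08)]
print(semi_sync_train(silos, num_periods=5, budget_secs=1.0))  # approx [2, -1]

Note that, unlike synchronous FedAvg, the number of local steps per period is determined by each silo's speed rather than fixed in advance; that is the property the paper exploits to avoid idle time on heterogeneous hardware.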

Citations (30)
