
Pollen: High-throughput Federated Learning Simulation via Resource-Aware Client Placement (2306.17453v3)

Published 30 Jun 2023 in cs.DC

Abstract: Federated Learning (FL) is a privacy-focused machine learning paradigm that collaboratively trains models directly on edge devices. Simulation plays an essential role in FL adoption, helping develop novel aggregation and client sampling strategies. However, current simulators cannot emulate large-scale systems in a time-efficient manner, which limits their utility and casts doubt on generalizability. This work proposes Pollen, a novel resource-aware system for speeding up simulations. Pollen addresses two limiting factors of existing simulators: (a) communication inefficiency derived from pull-based client execution and (b) inadequate load balancing when using heterogeneous hardware. Pollen executes high-throughput FL simulations at scale by (a) using a push-based client placement system, (b) learning an adaptive client schedule based on hardware statistics, and (c) estimating the optimal number of concurrent workers per GPU. We evaluate Pollen on four representative FL tasks and show that Pollen's placement model increases GPU utilization and reduces idle time. We compare Pollen to Flower, Flute, FedScale, Parrot, and pfl and show experimental speed-ups of days or weeks.
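The abstract contrasts pull-based client execution, where idle workers request their next client, with Pollen's push-based placement, where the server proactively assigns clients to GPU workers using measured hardware statistics. The sketch below is a minimal illustration of that idea, not Pollen's implementation: the names (`Worker`, `place_clients`) and the linear cost model (client samples divided by measured GPU throughput) are assumptions. It pushes each client to the worker with the earliest estimated finish time.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of push-based, resource-aware client placement.
# The cost model and worker statistics are assumptions for illustration,
# not Pollen's actual scheduler.

@dataclass(order=True)
class Worker:
    finish_time: float  # estimated time at which this GPU worker becomes free
    gpu_id: int = field(compare=False)
    throughput: float = field(compare=False)  # measured samples/sec on this GPU

def place_clients(client_workloads: dict[int, float],
                  workers: list[Worker]) -> dict[int, int]:
    """Push each client to the worker with the earliest estimated finish time.

    client_workloads maps client_id -> number of training samples.
    Returns a client_id -> gpu_id assignment.
    """
    heap = workers[:]
    heapq.heapify(heap)
    placement: dict[int, int] = {}
    # Schedule the heaviest clients first (longest-processing-time heuristic).
    for cid, samples in sorted(client_workloads.items(),
                               key=lambda kv: -kv[1]):
        w = heapq.heappop(heap)               # least-loaded GPU worker
        w.finish_time += samples / w.throughput
        placement[cid] = w.gpu_id
        heapq.heappush(heap, w)
    return placement

if __name__ == "__main__":
    workers = [Worker(0.0, gpu_id=0, throughput=900.0),   # fast GPU
               Worker(0.0, gpu_id=1, throughput=300.0)]   # slow GPU
    clients = {0: 5000, 1: 1200, 2: 800, 3: 4000}
    print(place_clients(clients, workers))
```

Scheduling the heaviest clients first is the classic longest-processing-time heuristic for balancing load across heterogeneous workers; a push-based server in this style keeps both slow and fast GPUs busy continuously instead of leaving fast GPUs idle between pull requests.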

