
GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning (2212.01523v1)

Published 3 Dec 2022 in cs.LG and cs.DC

Abstract: Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy. However, the substantial communication overhead of FL makes training challenging when edge devices have limited network bandwidth. Existing work to optimize FL bandwidth overlooks downstream transmission and does not account for FL client sampling. In this paper we propose GlueFL, a framework that incorporates new client sampling and model compression algorithms to mitigate low download bandwidths of FL clients. GlueFL prioritizes recently used clients and bounds the number of changed positions in compression masks in each round. Across three popular FL datasets and three state-of-the-art strategies, GlueFL reduces downstream client bandwidth by 27% on average and reduces training time by 29% on average.
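Based only on the abstract's description, GlueFL's two mechanisms can be sketched as a "sticky" sampler that prioritizes recently used clients and a top-k compression mask whose position changes are bounded per round, so that returning clients can reuse most of their cached model state. The following is a minimal illustrative sketch, not the paper's actual implementation; the function names and parameters (e.g. `num_sticky`, `max_changes`) are assumptions.

```python
import numpy as np

def sticky_sample(all_clients, prev_clients, num_clients, num_sticky, rng):
    """Sketch of sampling that prioritizes recently used clients:
    re-select up to num_sticky clients from the previous round and
    fill the remaining slots with fresh clients chosen uniformly."""
    sticky = list(rng.choice(prev_clients,
                             size=min(num_sticky, len(prev_clients)),
                             replace=False))
    pool = [c for c in all_clients if c not in set(sticky)]
    fresh = list(rng.choice(pool, size=num_clients - len(sticky),
                            replace=False))
    return sticky + fresh

def bounded_topk_mask(update, prev_mask_idx, k, max_changes):
    """Sketch of a size-k magnitude mask whose composition may differ
    from the previous round's mask in at most max_changes positions,
    capping how much fresh state a sampled client must download."""
    mag = np.abs(update)
    topk = np.argpartition(-mag, k)[:k]        # unconstrained top-k coords
    prev = set(int(i) for i in prev_mask_idx)
    keep = [i for i in topk if i in prev]      # coords already in the mask
    new = sorted((i for i in topk if i not in prev), key=lambda i: -mag[i])
    new = new[:max_changes]                    # admit at most max_changes new coords
    # Refill the mask to size k with the best coordinates from the old mask.
    fill = sorted(prev - set(keep), key=lambda i: -mag[i])
    fill = fill[: k - len(keep) - len(new)]
    return np.asarray(keep + new + fill, dtype=int)

# Toy usage: 100 clients, 10 sampled per round, 6 of them "sticky".
rng = np.random.default_rng(0)
clients = list(range(100))
prev_round = sticky_sample(clients, clients[:10], 10, num_sticky=0, rng=rng)
this_round = sticky_sample(clients, prev_round, 10, num_sticky=6, rng=rng)
update = rng.normal(size=1000)
mask = bounded_topk_mask(update, np.arange(50), k=50, max_changes=5)
```

The design intuition, as the abstract frames it, is that both knobs trade a little freshness (which clients participate, which coordinates are transmitted) for a large reduction in downstream bandwidth, since sticky clients already hold most of the masked model.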

Authors (6)
  1. Shiqi He (11 papers)
  2. Qifan Yan (2 papers)
  3. Feijie Wu (14 papers)
  4. Lanjun Wang (36 papers)
  5. Ivan Beschastnikh (24 papers)
  6. Mathias Lécuyer (17 papers)
Citations (5)
