
Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning (2208.09432v1)

Published 19 Aug 2022 in cs.LG and cs.DC

Abstract: Federated learning (FL) is a framework for machine learning across heterogeneous client devices in a privacy-preserving fashion. To date, most FL algorithms learn a "global" server model across multiple rounds. At each round, the same server model is broadcast to all participating clients, updated locally, and then aggregated across clients. In this work, we propose a more general procedure in which clients "select" what values are sent to them. Notably, this allows clients to operate on smaller, data-dependent slices. In order to make this practical, we outline a primitive, federated select, which enables client-specific selection in realistic FL systems. We discuss how to use federated select for model training and show that it can lead to drastic reductions in communication and client memory usage, potentially enabling the training of models too large to fit on-device. We also discuss the implications of federated select on privacy and trust, which in turn affect possible system constraints and design. Finally, we discuss open questions concerning model architectures, privacy-preserving technologies, and practical FL systems.
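The select-update-aggregate loop the abstract describes can be illustrated with a toy simulation: the server holds a large parameter table, each client requests only the rows relevant to its data, trains on that slice, and the server averages the sparse updates back in. All function and variable names below are illustrative, not the paper's or any library's API:

```python
import numpy as np

def federated_select(server_table, client_keys):
    """Send each client only the rows it selected, instead of
    broadcasting the full table to every client."""
    return {cid: server_table[keys].copy() for cid, keys in client_keys.items()}

def client_update(slice_, lr=0.1):
    # Toy stand-in for local training: shift the selected rows.
    return slice_ - lr * np.ones_like(slice_)

def aggregate(server_table, client_keys, client_slices):
    """Average the clients' updated slices back into the rows they touched;
    untouched rows keep their old values."""
    sums = np.zeros_like(server_table)
    counts = np.zeros(len(server_table))
    for cid, keys in client_keys.items():
        for j, k in enumerate(keys):
            sums[k] += client_slices[cid][j]
            counts[k] += 1
    touched = counts > 0
    server_table[touched] = sums[touched] / counts[touched, None]
    return server_table

# One round with a 6-row table and two clients selecting 2 rows each.
table = np.arange(12.0).reshape(6, 2)
keys = {"client_a": [0, 2], "client_b": [2, 5]}
slices = federated_select(table, keys)
updated = {cid: client_update(s) for cid, s in slices.items()}
table = aggregate(table, keys, updated)
```

Here each client downloads 2 rows instead of 6, which is the communication and memory saving the paper targets; rows selected by multiple clients (row 2 above) are averaged, and unselected rows are left unchanged.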

Authors (5)
  1. Zachary Charles (33 papers)
  2. Kallista Bonawitz (4 papers)
  3. Stanislav Chiknavaryan (2 papers)
  4. Brendan McMahan (11 papers)
  5. Blaise Agüera y Arcas (11 papers)
Citations (13)
