
FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching (2008.04975v1)

Published 11 Aug 2020 in stat.ML, cs.DS, and cs.LG

Abstract: Communication complexity and privacy are two key challenges in Federated Learning, where the goal is to perform distributed learning across a large number of devices. In this work, we introduce the FedSKETCH and FedSKETCHGATE algorithms to address both challenges jointly; they are intended for the homogeneous and heterogeneous data distribution settings, respectively. The key idea is to compress the accumulation of local gradients using a count sketch, so the server never has access to the gradients themselves, which provides privacy. Furthermore, because the sketch has much lower dimension than the gradients, our method is also communication-efficient. We provide sharp convergence guarantees for these schemes, and we back up our theory with a diverse set of experiments.
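The core primitive here is the count sketch: each coordinate of a vector is hashed to a bucket with a random sign, and a coordinate is estimated by taking the median of its signed bucket values across several independent hash rows. The following is a minimal NumPy illustration of that idea; the function names, dimensions, and shared-seed hashing are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def count_sketch(g, rows=5, cols=256, seed=0):
    """Compress a gradient vector g into a rows x cols count sketch.

    A shared seed stands in for public hash functions, so every client
    (and the server) derives identical bucket/sign assignments.
    """
    rng = np.random.default_rng(seed)
    d = g.shape[0]
    buckets = rng.integers(0, cols, size=(rows, d))      # bucket hash per row
    signs = rng.choice([-1.0, 1.0], size=(rows, d))      # sign hash per row
    S = np.zeros((rows, cols))
    for r in range(rows):
        # Scatter-add each signed coordinate into its bucket.
        np.add.at(S[r], buckets[r], signs[r] * g)
    return S, buckets, signs

def unsketch(S, buckets, signs):
    """Estimate each coordinate as the median over rows of its signed bucket."""
    rows = S.shape[0]
    est = np.stack([signs[r] * S[r][buckets[r]] for r in range(rows)])
    return np.median(est, axis=0)

# Toy aggregation: sketching is linear, so the server can sum the clients'
# sketches and decompress only the aggregate, never an individual gradient.
d = 1_000
g1, g2 = np.random.randn(d), np.random.randn(d)
S1, b, s = count_sketch(g1)
S2, _, _ = count_sketch(g2)
approx = unsketch(S1 + S2, b, s)
print(np.linalg.norm(approx - (g1 + g2)) / np.linalg.norm(g1 + g2))
```

The linearity shown in the toy aggregation is what makes the scheme attractive for federated learning: the server works only with low-dimensional sketches (communication efficiency) and reconstructs an approximation of the summed update rather than any single client's gradient (privacy).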

Authors (4)
  1. Farzin Haddadpour (14 papers)
  2. Belhal Karimi (14 papers)
  3. Ping Li (421 papers)
  4. Xiaoyun Li (24 papers)
Citations (31)
