
Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning (2202.05318v2)

Published 10 Feb 2022 in stat.ML, cs.CR, cs.LG, and math.OC

Abstract: Large-scale machine learning systems often involve data distributed across a collection of users. Federated learning algorithms leverage this structure by communicating model updates to a central server, rather than entire datasets. In this paper, we study stochastic optimization algorithms for a personalized federated learning setting involving local and global models subject to user-level (joint) differential privacy. While learning a private global model induces a cost of privacy, local learning is perfectly private. We provide generalization guarantees showing that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy. We illustrate our theoretical results with experiments on synthetic and real-world datasets.
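The tradeoff the abstract describes can be illustrated with a toy sketch: a global model is learned with clipped, noised user updates (user-level DP in the style of DP-FedAvg), local models are fit on each user's own data with no privacy cost, and each user interpolates the two. All names, data, and hyperparameters below (e.g. `alpha`, `noise_mult`) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: linear regression per user, where each user's
# parameter vector is a shared global component plus a user-specific shift.
n_users, n_samples, dim = 20, 50, 5
true_global = rng.normal(size=dim)
users = []
for _ in range(n_users):
    local_shift = 0.5 * rng.normal(size=dim)  # user-specific component
    X = rng.normal(size=(n_samples, dim))
    y = X @ (true_global + local_shift) + 0.1 * rng.normal(size=n_samples)
    users.append((X, y))

def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * X.T @ (X @ w - y) / len(y)

# Private global model: clip each user's update to bound its sensitivity,
# then add Gaussian noise to the average (user-level DP, DP-FedAvg style).
clip, noise_mult, lr, rounds = 1.0, 0.5, 0.1, 100
w_global = np.zeros(dim)
for _ in range(rounds):
    updates = []
    for X, y in users:
        g = grad(w_global, X, y)
        g = g / max(1.0, np.linalg.norm(g) / clip)  # clip to norm <= clip
        updates.append(g)
    noisy_mean = (np.mean(updates, axis=0)
                  + rng.normal(scale=noise_mult * clip / n_users, size=dim))
    w_global -= lr * noisy_mean

# Perfectly private local models: trained only on each user's own data,
# never communicated, so they incur no privacy cost.
def local_fit(X, y, steps=200, lr=0.1):
    w = np.zeros(dim)
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

# Personalization: each user interpolates its local model with the noisy
# global one; alpha controls the accuracy-privacy tradeoff.
alpha = 0.5
personalized = [alpha * local_fit(X, y) + (1 - alpha) * w_global
                for X, y in users]
```

Raising `alpha` leans on the noise-free local model (better when users have abundant data or the DP noise is large); lowering it leans on the shared global signal. The paper's generalization guarantees concern how to coordinate this choice, not this specific interpolation rule.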

Authors (5)
  1. Alberto Bietti (35 papers)
  2. Chen-Yu Wei (46 papers)
  3. John Langford (94 papers)
  4. Zhiwei Steven Wu (143 papers)
  5. Miroslav Dudík (22 papers)
Citations (31)
