
Balancing Privacy and Performance for Private Federated Learning Algorithms (2304.05127v2)

Published 11 Apr 2023 in cs.CR, cs.CV, cs.LG, and eess.IV

Abstract: Federated learning (FL) is a distributed ML framework in which multiple clients collaborate to train a model without exposing their private data. FL involves cycles of local computation and bi-directional communication between the clients and the server. To bolster data security during this process, FL algorithms frequently employ a differential privacy (DP) mechanism that introduces noise into each client's model updates before sharing. However, while enhancing privacy, the DP mechanism often hampers convergence performance. In this paper, we posit that an optimal balance exists between the number of local steps and communication rounds, one that maximizes convergence performance within a given privacy budget. Specifically, we present a proof for the optimal number of local steps and communication rounds that enhance the convergence bounds of the DP version of the ScaffNew algorithm. Our findings reveal a direct correlation between the optimal number of local steps and communication rounds and a set of problem parameters, e.g., the DP privacy budget, specifically in the context of strongly convex optimization. We furthermore provide empirical evidence to validate our theoretical findings.
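To make the clip-and-noise step described in the abstract concrete, the following is a minimal NumPy sketch of a DP-FedAvg-style client update on a toy strongly convex problem. It is illustrative only, not the paper's DP-ScaffNew algorithm (ScaffNew additionally maintains control variates and communicates probabilistically); all function names, hyperparameters, and the noise scale are assumed for the example rather than taken from the paper.

```python
import numpy as np

def dp_client_update(w_global, grad_fn, local_steps, lr, clip_norm, noise_std, rng):
    """One client round: local SGD, then clip and noise the update before sharing.

    A minimal DP-FedAvg-style sketch; the paper's DP-ScaffNew also uses
    control variates and probabilistic communication, omitted here.
    """
    w = w_global.copy()
    for _ in range(local_steps):
        w -= lr * grad_fn(w)                       # local gradient steps
    update = w - w_global
    # Clip the update to bound its L2 sensitivity ...
    update *= min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    # ... then add Gaussian noise calibrated to the privacy budget (epsilon, delta).
    return update + rng.normal(0.0, noise_std, size=update.shape)

# Toy strongly convex objective per client: 0.5 * ||w - b_i||^2, gradient w - b_i.
rng = np.random.default_rng(0)
targets = [rng.normal(size=5) for _ in range(4)]   # one b_i per client
w = np.zeros(5)
for _ in range(50):                                # communication rounds
    updates = [dp_client_update(w, lambda v, b=b: v - b,
                                local_steps=5, lr=0.1,
                                clip_norm=1.0, noise_std=0.05, rng=rng)
               for b in targets]
    w += np.mean(updates, axis=0)                  # server averages noised updates
```

The sketch also exposes the trade-off the paper optimizes: under a fixed privacy budget, each communicated (noised) update consumes privacy, so fewer rounds with more local steps inject less total noise, while too few rounds degrade convergence; the paper derives the optimal balance for strongly convex objectives.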

Authors (4)
  1. Xiangjian Hou (2 papers)
  2. Sarit Khirirat (13 papers)
  3. Mohammad Yaqub (77 papers)
  4. Samuel Horvath (20 papers)
