Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning (2407.19119v1)

Published 26 Jul 2024 in cs.LG, cs.AI, and cs.CR

Abstract: Over the last few years, federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private. Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs), which pose a serious threat to data confidentiality. In a recent study, Rezaei et al. revealed the existence of an accuracy-privacy trade-off in deep ensembles and proposed several fusion strategies to overcome it. In this paper, we explore the relationship between deep ensembles and FL. Specifically, we investigate whether confidence-based metrics derived from deep ensembles apply to FL and whether there is a trade-off between accuracy and privacy in FL with respect to MIAs. Empirical investigation shows no non-monotonic correlation between the number of clients and the accuracy-privacy trade-off. By experimenting with different numbers of federated clients, datasets, and confidence-metric-based fusion strategies, we identify and analytically justify the clear existence of the accuracy-privacy trade-off.
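For readers unfamiliar with the attack setup the abstract refers to, the sketch below illustrates the core idea of a confidence-based MIA combined with one simple fusion strategy (averaging per-client predictions). The fusion rule, the threshold value, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the authors' implementation) of a confidence-threshold
# membership inference attack against an averaged "fusion" of client models.
import numpy as np

def fused_confidences(client_probs: list) -> np.ndarray:
    """Average the per-client softmax outputs (one simple fusion strategy)."""
    return np.mean(np.stack(client_probs, axis=0), axis=0)

def confidence_mia(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Predict 'member' when the fused model's top-class confidence is high.

    Intuition: models are typically more confident on training (member)
    examples than on unseen (non-member) ones, and confidence-based MIAs
    exploit exactly this leakage.
    """
    top_conf = probs.max(axis=1)
    return top_conf >= threshold  # True -> predicted training-set member

# Toy usage: three hypothetical clients, four examples, three classes.
rng = np.random.default_rng(0)
clients = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
fused = fused_confidences(clients)
print(confidence_mia(fused, threshold=0.6))
```

Intuitively, this is where the trade-off the paper studies can arise: fusion strategies that raise accuracy tend to sharpen confidence on training data, which makes this kind of thresholding attack more effective.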

Authors (10)
  1. Sayyed Farid Ahamed (5 papers)
  2. Soumya Banerjee (30 papers)
  3. Sandip Roy (46 papers)
  4. Devin Quinn (4 papers)
  5. Marc Vucovich (8 papers)
  6. Kevin Choi (15 papers)
  7. Abdul Rahman (27 papers)
  8. Alison Hu (5 papers)
  9. Edward Bowen (25 papers)
  10. Sachin Shetty (17 papers)
