Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy (2202.09897v1)

Published 20 Feb 2022 in cs.CR, cs.AI, and cs.MA

Abstract: Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model while keeping client training data private, even from an untrusted server. Prior works do not provide efficient solutions that protect against collusion attacks in which parties collaborate to expose an honest client's model parameters. We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the "Sybil" attack in which a server preferentially selects compromised devices or simulates fake devices. We leverage the novel privacy mechanism to construct a secure federated learning protocol and prove the security of that protocol. We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.
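The core idea behind distributed differential privacy in this setting is that each client adds only a share of the required noise locally, so the full privacy-preserving noise appears only in the aggregate and no single party (or small colluding subset) observes an honest client's unmasked update. The paper's exact mechanism is not reproduced here; the following is a minimal illustrative sketch of that general noise-sharing idea, with all function names and parameters being assumptions for illustration:

```python
import numpy as np

def client_update(params, total_sigma, n_clients, rng):
    """Hypothetical helper: add this client's share of Gaussian noise.

    Each client adds N(0, total_sigma^2 / n_clients) noise per coordinate,
    so the sum of n_clients shares has variance total_sigma^2, matching
    a centrally applied Gaussian mechanism on the aggregate.
    """
    noise = rng.normal(0.0, total_sigma / np.sqrt(n_clients), size=params.shape)
    return params + noise

rng = np.random.default_rng(0)
n_clients, dim, total_sigma = 1000, 4, 1.0

# Every client holds the same true update (all ones) in this toy example.
noisy_updates = [
    client_update(np.ones(dim), total_sigma, n_clients, rng)
    for _ in range(n_clients)
]

# The server only ever sees noise-masked updates; their mean is still
# close to the true mean because the per-client noise shares average out.
aggregate = np.mean(noisy_updates, axis=0)
print(np.round(aggregate, 3))
```

Note that this sketch does not by itself provide the collusion resistance the paper targets; the paper's contribution is making such noise shares "oblivious" so that even a server colluding with compromised or simulated (Sybil) clients cannot strip an honest client's noise.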

Authors (4)
  1. David Byrd (11 papers)
  2. Vaikkunth Mugunthan (13 papers)
  3. Antigoni Polychroniadou (17 papers)
  4. Tucker Hybinette Balch (6 papers)
Citations (5)