WW-FL: Secure and Private Large-Scale Federated Learning (2302.09904v3)

Published 20 Feb 2023 in cs.LG, cs.CR, cs.DC, cs.IT, and math.IT

Abstract: Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices. However, recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks and the potential disclosure of sensitive information in individual model updates as well as the aggregated global model. This paper explores the inadequacies of existing FL protection measures when applied independently, and the challenges of creating effective compositions. Addressing these issues, we propose WW-FL, an innovative framework that combines secure multi-party computation (MPC) with hierarchical FL to guarantee data and global model privacy. One notable feature of WW-FL is its capability to prevent malicious clients from directly poisoning model parameters, confining them to less destructive data poisoning attacks. We furthermore provide a PyTorch-based FL implementation integrated with Meta's CrypTen MPC framework to systematically measure the performance and robustness of WW-FL. Our extensive evaluation demonstrates that WW-FL is a promising solution for secure and private large-scale federated learning.
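To make the secure-aggregation idea concrete, below is a minimal, hypothetical sketch using the public API of Meta's CrypTen framework, which the paper's implementation builds on. The three-client setup, tensor sizes, and variable names are illustrative assumptions, not taken from the authors' code; the sketch only shows the core principle that individual updates stay secret-shared and only the aggregate is revealed.

```python
# Minimal sketch (assumptions noted above): MPC-based secure aggregation
# of client model updates with CrypTen. Individual updates are secret-shared,
# averaged under MPC, and only the final aggregate is decrypted.
import torch
import crypten

crypten.init()

# Hypothetical local updates from three clients (flattened parameter vectors).
client_updates = [torch.randn(10) for _ in range(3)]

# Secret-share each update; the aggregator never sees plaintext values.
shared_updates = [crypten.cryptensor(u) for u in client_updates]

# Plain FedAvg computed on secret shares: element-wise sum, then divide
# by the number of clients.
aggregate = shared_updates[0]
for share in shared_updates[1:]:
    aggregate = aggregate + share
aggregate = aggregate / len(shared_updates)

# Only the averaged global update is revealed.
global_update = aggregate.get_plain_text()
print(global_update)
```

Because clients can only contribute training data to a model that is itself trained inside MPC, a malicious participant cannot tamper with model parameters directly, which is the confinement to data poisoning that the abstract describes.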

Authors (6)
  1. Felix Marx (1 paper)
  2. Thomas Schneider (53 papers)
  3. Ajith Suresh (14 papers)
  4. Tobias Wehrle (1 paper)
  5. Christian Weinert (8 papers)
  6. Hossein Yalame (6 papers)