
Separation of Powers in Federated Learning (2105.09400v1)

Published 19 May 2021 in cs.CR and cs.LG

Abstract: Federated Learning (FL) enables collaborative training among mutually distrusting parties. Model updates, rather than training data, are concentrated and fused in a central aggregation server. A key security challenge in FL is that an untrustworthy or compromised aggregation process might lead to unforeseeable information leakage. This challenge is especially acute due to recently demonstrated attacks that have reconstructed large fractions of training data from ostensibly "sanitized" model updates. In this paper, we introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture to break down information concentration with regard to a single aggregator. Based on the unique computational properties of model-fusion algorithms, all exchanged model updates in TRUDA are disassembled at the parameter-granularity and re-stitched to random partitions designated for multiple TEE-protected aggregators. Thus, each aggregator only has a fragmentary and shuffled view of model updates and is oblivious to the model architecture. Our new security mechanisms can fundamentally mitigate training reconstruction attacks, while still preserving the final accuracy of trained models and keeping performance overheads low.
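
The abstract describes disassembling model updates at parameter granularity and re-stitching them into random partitions, each handled by a different TEE-protected aggregator. The sketch below illustrates that idea in the simplest possible terms, assuming a shared secret permutation and plain federated averaging; the function names and the partitioning scheme are hypothetical and are not taken from the TRUDA implementation.

```python
import numpy as np

# Minimal sketch (not the authors' TRUDA code) of parameter-granularity shuffling:
# each client's flattened update is permuted with a shared secret permutation and
# split across several aggregators, so no single aggregator sees a complete or
# contiguous view of the model. All names here are hypothetical.

def shard_update(update, permutation, num_aggregators):
    """Shuffle a flattened update and split it into one shard per aggregator."""
    shuffled = update[permutation]
    return np.array_split(shuffled, num_aggregators)

def fuse_shards(client_shards):
    """Each aggregator averages only the shard of parameters assigned to it."""
    return np.mean(np.stack(client_shards), axis=0)

def reassemble(fused_shards, permutation):
    """Re-stitch the fused shards and invert the permutation to recover the update."""
    fused = np.concatenate(fused_shards)
    restored = np.empty_like(fused)
    restored[permutation] = fused
    return restored

# Toy run: 3 clients, 4 aggregators, a 10-parameter "model".
rng = np.random.default_rng(0)
num_params, num_clients, num_aggregators = 10, 3, 4
permutation = rng.permutation(num_params)  # shared secret, unknown to aggregators
client_updates = [rng.normal(size=num_params) for _ in range(num_clients)]

shards_per_client = [shard_update(u, permutation, num_aggregators)
                     for u in client_updates]
fused_shards = [fuse_shards([shards_per_client[c][a] for c in range(num_clients)])
                for a in range(num_aggregators)]
global_update = reassemble(fused_shards, permutation)

# Sanity check: the result equals the plain federated average of the raw updates.
assert np.allclose(global_update, np.mean(client_updates, axis=0))
```

Because averaging is element-wise, fusing shuffled fragments and then inverting the shuffle yields the same aggregate as fusing the intact updates, which is why such a scheme can preserve final model accuracy.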

Authors (7)
  1. Pau-Chen Cheng (2 papers)
  2. Kevin Eykholt (16 papers)
  3. Zhongshu Gu (4 papers)
  4. Hani Jamjoom (9 papers)
  5. K. R. Jayaram (15 papers)
  6. Enriquillo Valdez (2 papers)
  7. Ashish Verma (31 papers)
Citations (12)
