Privacy-Preserving Federated Learning on Partitioned Attributes (2104.14383v1)

Published 29 Apr 2021 in cs.LG and cs.CR

Abstract: Real-world data is usually segmented by attributes and distributed across different parties. Federated learning empowers collaborative training without exposing local data or models. As we demonstrate through designed attacks, even with a small proportion of corrupted data, an adversary can accurately infer the input attributes. We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations. To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm, which respectively deals with the accuracy loss and privacy loss in the forward and backward gradient descent steps, achieving the two objectives simultaneously. Extensive experiments on a variety of datasets have shown that our defense significantly mitigates privacy leakage with negligible impact on the federated learning task.
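The forward-backward splitting idea in the abstract alternates a gradient step on the task (accuracy) loss with a step on the privacy loss. The toy sketch below illustrates that alternation on a scalar model; the losses, names, and the shrinkage-style privacy penalty are illustrative assumptions, not the paper's actual objectives or code.

```python
# Hypothetical sketch of a forward-backward splitting update: the forward
# step descends the task (accuracy) loss, the backward step descends a
# privacy penalty on the released representation z = w * x. Toy losses only.

def task_loss_grad(w, x, y):
    # gradient of 0.5 * (w*x - y)^2 with respect to w (toy regression task)
    return (w * x - y) * x

def privacy_penalty_grad(w, x):
    # gradient of 0.5 * (w*x)^2 with respect to w: shrinks the intermediate
    # representation toward zero, a stand-in for an adversarial privacy loss
    return (w * x) * x

def forward_backward_step(w, x, y, lr=0.1, lam=0.05):
    # forward step: gradient descent on the accuracy objective
    w = w - lr * task_loss_grad(w, x, y)
    # backward step: gradient descent on the privacy objective at the new point
    w = w - lr * lam * privacy_penalty_grad(w, x)
    return w

w = 0.0
for _ in range(200):
    w = forward_backward_step(w, x=1.0, y=2.0)
# w settles near the task optimum (y = 2), slightly shrunk by the privacy term
```

With the weights above, the iterate converges to a point just below the task optimum, showing how the two objectives are balanced rather than optimized jointly in a single step.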

Authors (7)
  1. Shuang Zhang (132 papers)
  2. Liyao Xiang (21 papers)
  3. Xi Yu (25 papers)
  4. Pengzhi Chu (5 papers)
  5. Yingqi Chen (7 papers)
  6. Chen Cen (1 paper)
  7. Li Wang (470 papers)
Citations (2)
