SAPAG: A Self-Adaptive Privacy Attack From Gradients (2009.06228v1)

Published 14 Sep 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Distributed learning such as federated learning or collaborative learning enables model training on decentralized data from users and only collects local gradients, where data is processed close to its sources for data privacy. The nature of not centralizing the training data addresses the privacy issue of privacy-sensitive data. Recent studies show that a third party can reconstruct the true training data in the distributed machine learning system through the publicly-shared gradients. However, existing reconstruction attack frameworks lack generalizability across different Deep Neural Network (DNN) architectures and different weight initializations, and can only succeed in the early training phase. To address these limitations, in this paper we propose a more general privacy attack from gradients, SAPAG, which uses a Gaussian kernel based on the gradient difference as a distance measure. Our experiments demonstrate that SAPAG can reconstruct the training data on different DNNs with different weight initializations and on DNNs in any training phase.
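
The attack described in the abstract is a gradient-matching reconstruction: the attacker optimizes a dummy input (and label) so that the gradients it induces match the gradients shared by a client, with the match measured by a Gaussian kernel on the gradient difference. The following is a minimal, hypothetical PyTorch sketch of that loop; the toy model, the fixed bandwidth sigma, the optimizer, and the iteration count are illustrative assumptions, and the paper's self-adaptive choice of the kernel bandwidth is not reproduced here.

```python
import torch
import torch.nn as nn

# Toy victim model and loss (hypothetical architecture, for illustration only).
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
criterion = nn.CrossEntropyLoss()

# The victim's private sample and the gradient it would share in federated training.
x_true = torch.randn(1, 32)
y_true = torch.tensor([3])
true_grads = [g.detach() for g in
              torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())]

# The attacker optimizes a dummy input and a soft dummy label so that their
# gradients match the shared ones under a Gaussian-kernel distance on the
# gradient difference.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)
sigma = 1.0  # kernel bandwidth; SAPAG chooses this adaptively, fixed here as an assumption

for step in range(300):
    optimizer.zero_grad()
    logits = model(x_dummy)
    # Cross-entropy against the (soft) dummy label, differentiable w.r.t. both inputs.
    dummy_loss = -(torch.softmax(y_dummy, dim=-1) * torch.log_softmax(logits, dim=-1)).sum()
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Gaussian-kernel distance per layer: 1 - exp(-||g_dummy - g_true||^2 / sigma)
    dist = sum(1.0 - torch.exp(-((dg - tg) ** 2).sum() / sigma)
               for dg, tg in zip(dummy_grads, true_grads))
    dist.backward()
    optimizer.step()

# After optimization, x_dummy is the attacker's estimate of the private input x_true.
```

Compared with an L2 distance between gradients, the Gaussian kernel bounds each layer's contribution to the objective, which is what allows the matching to remain informative across architectures, weight initializations, and training phases, per the abstract.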

Authors (8)
  1. Yijue Wang (6 papers)
  2. Jieren Deng (12 papers)
  3. Dan Guo (66 papers)
  4. Chenghong Wang (17 papers)
  5. Xianrui Meng (6 papers)
  6. Hang Liu (135 papers)
  7. Caiwen Ding (98 papers)
  8. Sanguthevar Rajasekaran (21 papers)
Citations (33)
