
Bounding Reconstruction Attack Success of Adversaries Without Data Priors (2402.12861v1)

Published 20 Feb 2024 in cs.LG and cs.CR

Abstract: Reconstruction attacks on ML models pose a serious risk of sensitive data leakage. In specific contexts, an adversary can (almost) perfectly reconstruct training data samples from a trained model using the model's gradients. When training ML models with differential privacy (DP), formal upper bounds on the success of such reconstruction attacks can be provided. So far, these bounds have been formulated under worst-case assumptions that might not be realistic in practice. In this work, we provide formal upper bounds on reconstruction success under realistic adversarial settings against ML models trained with DP, and we support these bounds with empirical results. With this, we show that in realistic scenarios, (a) the expected reconstruction success can be bounded appropriately in different contexts and by different metrics, which (b) allows for a more educated choice of a privacy parameter.
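To make the threat model concrete, below is a minimal sketch of a gradient-based reconstruction attack of the kind the abstract alludes to: the adversary observes a gradient (e.g., a shared federated-learning update) and optimizes a dummy input until its gradient matches the observed one. The toy model, shapes, and attack hyperparameters are illustrative assumptions, not details from the paper, and the sketch assumes the adversary knows the label, a common simplification in this attack family.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model and loss; stand-ins for the trained model in the abstract.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
loss_fn = nn.CrossEntropyLoss()

# Private training sample the adversary tries to recover.
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])

# Gradient the adversary observes (e.g., a federated update).
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Optimize a dummy input so its gradient matches the observed gradient.
x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for step in range(500):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    # L2 distance between observed and dummy gradients.
    grad_loss = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_loss.backward()
    opt.step()

print("reconstruction error:", (x_dummy - x_true).norm().item())

The defense against which the paper's bounds are stated is DP training. As a companion sketch, here is one DP-SGD-style update with per-sample gradient clipping and Gaussian noise, following the standard recipe; the clip norm C and noise multiplier sigma are illustrative placeholders, and it is privacy parameters of this kind that the abstract says can be chosen in a more educated way.

import torch

def dp_sgd_step(params, per_sample_grads, C=1.0, sigma=1.0, lr=0.1):
    # params: a list of parameter tensors, e.g. list(model.parameters());
    # per_sample_grads: one list of per-parameter gradient tensors per sample.
    summed = [torch.zeros_like(p) for p in params]
    for grads in per_sample_grads:
        # Clip each sample's gradient to L2 norm at most C.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(C / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    n = len(per_sample_grads)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise calibrated to the clipping bound C.
            noise = sigma * C * torch.randn_like(p)
            p.sub_(lr * (s + noise) / n)

Calling dp_sgd_step once per batch yields one private update; a larger sigma strengthens the privacy guarantee, and hence any reconstruction bound derived from it, at a cost in utility.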

