Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy (2405.10376v1)

Published 16 May 2024 in cs.CR and cs.AI

Abstract: Federated Learning (FL) has emerged as a leading paradigm for decentralized, privacy-preserving machine learning training. However, recent research on gradient inversion attacks (GIAs) has shown that gradient updates in FL can leak information about private training samples. While existing surveys on GIAs have focused on the honest-but-curious server threat model, there is a dearth of research categorizing attacks under the realistic and far more privacy-infringing cases of malicious servers and clients. In this paper, we present a survey and novel taxonomy of GIAs that emphasizes FL threat models, particularly those of malicious servers and clients. We first formally define GIAs and contrast conventional attacks with the malicious attacker. We then summarize existing honest-but-curious attack strategies, corresponding defenses, and evaluation metrics. Critically, we dive into attacks with malicious servers and clients to highlight how they break existing FL defenses, focusing specifically on reconstruction methods, target model architectures, target data, and evaluation metrics. Lastly, we discuss open problems and future research directions.
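To make the leakage channel concrete, below is a minimal, hypothetical sketch of an honest-but-curious gradient inversion attack in the style of Deep Leakage from Gradients (DLG): the attacker observes a client's gradient update and optimizes dummy data until its gradients match the observed ones. The toy model, input shape, soft-label handling, and optimizer settings are illustrative assumptions for this sketch, not the survey's own method or code.

```python
# Minimal DLG-style gradient inversion sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy client model and one private training example (assumed 32x32 RGB, 10 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()
x_private = torch.rand(1, 3, 32, 32)
y_private = torch.tensor([3])

# The "observed" update: gradients the client would send to the server.
loss = criterion(model(x_private), y_private)
true_grads = [g.detach() for g in torch.autograd.grad(loss, tuple(model.parameters()))]

# The attacker optimizes dummy data (and a soft label) so its gradients match.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(50):
    def closure():
        optimizer.zero_grad()
        # Cross-entropy with a learned soft label, kept differentiable w.r.t. the dummies.
        dummy_loss = torch.mean(torch.sum(
            -torch.softmax(y_dummy, dim=-1) * torch.log_softmax(model(x_dummy), dim=-1),
            dim=-1))
        dummy_grads = torch.autograd.grad(
            dummy_loss, tuple(model.parameters()), create_graph=True)
        # Gradient-matching objective: squared distance between dummy and observed gradients.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

# Simple reconstruction-quality check against the private sample.
print("reconstruction MSE:", torch.mean((x_dummy.detach() - x_private) ** 2).item())
```

Defenses discussed in the paper (e.g., secure aggregation, differential privacy, gradient compression) aim to make this matching objective uninformative, and reconstruction quality is commonly scored with MSE/PSNR, SSIM, or LPIPS against the private sample.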
