PriPrune: Quantifying and Preserving Privacy in Pruned Federated Learning (2310.19958v2)

Published 30 Oct 2023 in cs.LG, cs.CR, cs.IT, and math.IT

Abstract: Federated learning (FL) is a paradigm that allows several client devices and a server to collaboratively train a global model by exchanging only model updates, without the devices sharing their local training data. These devices are often constrained in terms of communication and computation resources, and can further benefit from model pruning -- a paradigm that is widely used to reduce the size and complexity of models. Intuitively, by making local models coarser, pruning is expected to also provide some protection against privacy attacks in the context of FL. However, this protection has not been previously characterized, formally or experimentally, and it is unclear whether it is sufficient against state-of-the-art attacks. In this paper, we perform the first investigation of privacy guarantees for model pruning in FL. We derive information-theoretic upper bounds on the amount of information leaked by pruned FL models. We complement and validate these theoretical findings with comprehensive experiments that involve state-of-the-art privacy attacks on several state-of-the-art FL pruning schemes, using benchmark datasets. This evaluation provides valuable insights into the choices and parameters that can affect the privacy protection provided by pruning. Based on these insights, we introduce PriPrune -- a privacy-aware algorithm for local model pruning, which uses a personalized per-client defense mask and adapts the defense pruning rate so as to jointly optimize privacy and model performance. PriPrune is universal in that it can be applied after any pruned FL scheme on the client, without modification, and protects against any inversion attack by the server. Our empirical evaluation demonstrates that PriPrune significantly improves the privacy-accuracy tradeoff compared to state-of-the-art pruned FL schemes that do not take privacy into account.
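
To make the idea concrete, the sketch below illustrates a client-side "defense mask" applied on top of an already-pruned update before it is sent to the server. This is a minimal illustration under stated assumptions, not the authors' implementation: in PriPrune the personalized mask is learned and the defense pruning rate is adapted to jointly optimize privacy and accuracy, whereas here the mask is a simple magnitude heuristic with a fixed rate, and all names (apply_defense_mask, defense_rate, local_update) are illustrative.

```python
import numpy as np

def apply_defense_mask(update: np.ndarray, defense_rate: float) -> np.ndarray:
    """Zero an extra `defense_rate` fraction of the smallest-magnitude
    weights that survived the original pruning step (illustrative heuristic)."""
    masked = update.copy()
    nonzero = np.flatnonzero(masked)          # indices of weights kept by pruning
    k = int(defense_rate * nonzero.size)
    if k == 0:
        return masked
    # Zero the k smallest-magnitude survivors first (simple magnitude criterion).
    order = nonzero[np.argsort(np.abs(masked[nonzero]))]
    masked[order[:k]] = 0.0
    return masked

# Hypothetical client step: prune with any scheme, then apply the defense mask
# before sending the update to the server.
rng = np.random.default_rng(0)
local_update = rng.normal(size=1000)             # stand-in for a flattened model update
local_update[np.abs(local_update) < 0.5] = 0.0   # stand-in for a standard pruning scheme
protected_update = apply_defense_mask(local_update, defense_rate=0.2)
```

The design point this sketch captures is that the defense acts purely on the client side, after any existing pruning scheme, so the server-side aggregation and the underlying FL protocol need no modification.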

Authors (4)
  1. Tianyue Chu (3 papers)
  2. Mengwei Yang (4 papers)
  3. Nikolaos Laoutaris (25 papers)
  4. Athina Markopoulou (56 papers)
Citations (1)
