Make Landscape Flatter in Differentially Private Federated Learning (2303.11242v2)

Published 20 Mar 2023 in cs.LG, cs.CR, and cs.CV

Abstract: To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de facto standard for privacy protection: local updates are clipped and random noise is added. However, existing DPFL methods tend to produce a sharper loss landscape and weaker weight-perturbation robustness, resulting in severe performance degradation. To alleviate these issues, we propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally flat models with better stability and weight-perturbation robustness, which yields local updates with small norms and greater robustness to DP noise, thereby improving performance. From a theoretical perspective, we analyze in detail how DP-FedSAM mitigates the performance degradation induced by DP. Meanwhile, we provide rigorous privacy guarantees via Rényi DP and present a sensitivity analysis of the local updates. Finally, we empirically confirm that our algorithm outperforms existing state-of-the-art (SOTA) baselines in DPFL. Code is available at https://github.com/YMJS-Irfan/DP-FedSAM
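To make the described procedure concrete, below is a minimal sketch of one client's round under client-level DPFL with a SAM-style local step: ascend to a weight perturbation that increases the loss, take the gradient there, descend, then clip the whole local update and add Gaussian noise. This is not the authors' implementation (see the linked repository for that); the function name, hyperparameters (lr, rho, clip_norm, noise_multiplier, local_steps), and the choice to add noise on the client rather than at the server are illustrative assumptions.

```python
import torch

def dp_fedsam_local_update(model, loss_fn, data_loader, lr=0.1, rho=0.05,
                           clip_norm=1.0, noise_multiplier=1.0, local_steps=1):
    # Snapshot the global weights received from the server.
    global_params = {n: p.detach().clone() for n, p in model.named_parameters()}

    for _ in range(local_steps):
        for x, y in data_loader:
            # SAM ascent: perturb weights toward higher loss within an L2 ball of radius rho.
            loss = loss_fn(model(x), y)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
            eps = [rho * g / (grad_norm + 1e-12) for g in grads]
            with torch.no_grad():
                for p, e in zip(model.parameters(), eps):
                    p.add_(e)

            # Descent: gradient evaluated at the perturbed point.
            loss = loss_fn(model(x), y)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            with torch.no_grad():
                for p, e, g in zip(model.parameters(), eps, grads):
                    p.sub_(e)        # undo the SAM perturbation
                    p.sub_(lr * g)   # step with the SAM gradient

    # Client-level DP: clip the full local update to clip_norm, then add Gaussian noise.
    # (In standard DP-FedAvg the noise is often added at the server after aggregation;
    # it is added here only to keep the sketch self-contained.)
    with torch.no_grad():
        delta = {n: p.detach() - global_params[n] for n, p in model.named_parameters()}
        total_norm = torch.sqrt(sum(d.pow(2).sum() for d in delta.values()))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        noisy_delta = {
            n: d * scale + noise_multiplier * clip_norm * torch.randn_like(d)
            for n, d in delta.items()
        }
    return noisy_delta  # sent to the server, which averages the clients' updates
```

The intuition from the abstract is visible in the last block: because SAM drives the client toward a flat region, the update delta tends to have a smaller norm, so clipping discards less signal and the fixed-scale Gaussian noise perturbs the model less in relative terms.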

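The privacy accounting behind such guarantees typically rests on standard Rényi DP facts (Mironov, 2017). As a hedged reference only, assuming each round releases one Gaussian-perturbed quantity with L2-sensitivity C and noise standard deviation sigma*C, and ignoring any amplification from client subsampling (the paper's exact bound may be tighter):

```latex
% One Gaussian mechanism with L2-sensitivity C and noise std \sigma C:
\epsilon_{\mathrm{RDP}}(\alpha) = \frac{\alpha}{2\sigma^{2}}, \qquad \alpha > 1.
% Additive composition over T communication rounds:
\epsilon^{(T)}_{\mathrm{RDP}}(\alpha) = \frac{T\alpha}{2\sigma^{2}}.
% Conversion to (\epsilon,\delta)-DP, optimized over the Renyi order \alpha:
\epsilon \le \min_{\alpha > 1}\left( \frac{T\alpha}{2\sigma^{2}} + \frac{\log(1/\delta)}{\alpha - 1} \right).
```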