
PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks (2312.01045v1)

Published 2 Dec 2023 in cs.CR, cs.AI, and cs.LG

Abstract: Federated Learning (FL) faces two major issues: privacy leakage and poisoning attacks, which can seriously undermine the reliability and security of the system. Overcoming both simultaneously poses a great challenge, because privacy-protection policies prohibit access to users' local gradients to avoid privacy leakage, while Byzantine-robust methods require access to these gradients to defend against poisoning attacks. To address these problems, we propose PROFL, a novel privacy-preserving Byzantine-robust FL framework. PROFL is based on a two-trapdoor additively homomorphic encryption algorithm and blinding techniques to ensure data privacy throughout the entire FL process. During the defense process, PROFL first applies a secure Multi-Krum algorithm to remove malicious gradients at the user level. Then, based on the Pauta criterion, we propose a statistic-based privacy-preserving defense algorithm that eliminates outlier interference at the feature level and resists impersonation poisoning attacks with stronger concealment. Detailed theoretical analysis proves the security and efficiency of the proposed method. In extensive experiments on two benchmark datasets, PROFL improves accuracy by 39% to 75% across different attack settings compared to similar privacy-preserving robust methods, demonstrating a significant advantage in robustness.
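The two-stage defense described in the abstract can be illustrated in plaintext. The sketch below is not PROFL itself: the paper runs these computations under two-trapdoor homomorphic encryption and blinding, whereas this toy version operates on raw NumPy gradients, and the function names and parameters (`multi_krum`, `pauta_filter`, `f`, `m`) are illustrative choices, not names from the paper.

```python
import numpy as np

def multi_krum(grads, f, m):
    """Multi-Krum selection (Blanchard et al., 2017): score each of the n
    gradients by the sum of squared distances to its n - f - 2 nearest
    neighbours, then keep the indices of the m lowest-scoring gradients."""
    n = len(grads)
    dists = np.array([[np.sum((g1 - g2) ** 2) for g2 in grads] for g1 in grads])
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))  # distances to the other n-1 gradients
        scores.append(np.sum(d[: n - f - 2]))
    return np.argsort(scores)[:m]

def pauta_filter(grads):
    """Feature-level filtering via the Pauta (3-sigma) criterion: any
    coordinate farther than 3 standard deviations from the coordinate-wise
    mean is treated as an outlier and replaced by that mean, then the
    filtered gradients are averaged."""
    g = np.stack(grads)
    mu, sigma = g.mean(axis=0), g.std(axis=0)
    mask = np.abs(g - mu) <= 3 * sigma
    return np.where(mask, g, mu).mean(axis=0)

# Toy run: 8 honest gradients near 1.0 plus 2 obviously poisoned ones.
rng = np.random.default_rng(0)
grads = [rng.normal(1.0, 0.01, 4) for _ in range(8)]
grads += [np.full(4, 50.0), np.full(4, -50.0)]

kept = multi_krum(grads, f=2, m=6)          # user-level: drops the poisoned gradients
agg = pauta_filter([grads[i] for i in kept])  # feature-level: 3-sigma outlier removal
```

In this toy setting, Multi-Krum discards the two poisoned gradients because their distances to every neighbour are large, and the Pauta filter then suppresses any remaining per-coordinate outliers before averaging.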

Authors (2)
  1. Yisheng Zhong (13 papers)
  2. Li-Ping Wang (7 papers)