FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users (2306.05112v3)
Abstract: The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm. While FL ensures that a user's data always remain with the user, the gradients are shared with the centralized server to build the global model. This results in privacy leakage, where the server can infer private information from the shared gradients. To mitigate this flaw, next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server. However, this approach creates other challenges, such as malicious users sharing false gradients: since the gradients are encrypted, the server is unable to identify rogue users. To mitigate both threats, this paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme. We develop a distributed multi-key additive homomorphic encryption scheme that supports model aggregation in FL. We also develop a novel aggregation scheme within the encrypted domain, utilizing users' non-poisoning rates, to effectively address data poisoning attacks while privacy is preserved by the proposed encryption scheme. Rigorous security, privacy, convergence, and experimental analyses show that FheFL is novel, secure, and private, and achieves comparable accuracy at a reasonable computational cost.
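The two building blocks named in the abstract are an additively homomorphic encryption scheme and a server-side rule that weights each user's encrypted update by a non-poisoning rate. The sketch below is a minimal illustration of those additive mechanics, with textbook single-key Paillier encryption standing in for the paper's distributed multi-key construction; the demo primes, the integer weights, and all function names are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Textbook Paillier cryptosystem: additively homomorphic, i.e.
#   Enc(m1) * Enc(m2) mod n^2  decrypts to  m1 + m2, and
#   Enc(m)^k      mod n^2      decrypts to  k * m.
# Used here only as a stand-in for the paper's distributed
# multi-key additive scheme.

def keygen(p=293, q=433):
    """Tiny demo primes only; real deployments need a 2048-bit (or larger) n."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification for the generator
    mu = pow(lam, -1, n)         # L(g^lam mod n^2) = lam when g = n + 1
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)       # x = 1 + (m * lam) * n  (mod n^2)
    return ((x - 1) // n) * mu % n

def he_add(pk, c1, c2):
    """Ciphertext of m1 + m2."""
    n, _ = pk
    return (c1 * c2) % (n * n)

def he_scale(pk, c, k):
    """Ciphertext of k * m, for a public integer weight k."""
    n, _ = pk
    return pow(c, k, n * n)

# --- encrypted, weighted aggregation (hypothetical values) ---
pk, sk = keygen()
updates = [12, 7, 30]   # integer-quantized gradient entries, one per user
weights = [5, 5, 1]     # integer "non-poisoning" scores (illustrative)

agg = encrypt(pk, 0)
for m, w in zip(updates, weights):
    agg = he_add(pk, agg, he_scale(pk, encrypt(pk, m), w))

assert decrypt(sk, agg) == sum(w * m for w, m in zip(weights, updates))  # 125
print("weighted encrypted sum:", decrypt(sk, agg))
```

Because the scheme is only additively homomorphic, the server can weight and sum ciphertexts but never sees an individual gradient; the paper's contribution replaces the single secret key used above with a distributed multi-key setup, so no single party can decrypt on its own.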
- Yogachandran Rahulamathavan
- Charuka Herath
- Xiaolan Liu
- Sangarapillai Lambotharan
- Carsten Maple