
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users (2306.05112v3)

Published 8 Jun 2023 in cs.AI and cs.CR

Abstract: The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm. While FL ensures that a user's data always remain with the user, the gradients are shared with the centralized server to build the global model. This results in privacy leakage, where the server can infer private information from the shared gradients. To mitigate this flaw, the next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server. However, this approach creates other challenges, such as malicious users sharing false gradients. Since the gradients are encrypted, the server is unable to identify rogue users. To mitigate both attacks, this paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme. We develop a distributed multi-key additive homomorphic encryption scheme that supports model aggregation in FL. We also develop a novel aggregation scheme within the encrypted domain, utilizing users' non-poisoning rates, to effectively address data poisoning attacks while ensuring privacy is preserved by the proposed encryption scheme. Rigorous security, privacy, convergence, and experimental analyses have been provided to show that FheFL is novel, secure, and private, and achieves comparable accuracy at reasonable computational cost.
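The paper's distributed multi-key scheme is not reproduced in this listing, but the core idea of aggregating model updates in the encrypted domain can be illustrated with a toy single-key additively homomorphic cryptosystem (a minimal Paillier sketch with tiny, insecure parameters — this is an assumption-laden stand-in, not the FheFL construction):

```python
from math import gcd

# Toy Paillier cryptosystem: single-key, tiny primes, illustration only.
# NOT the paper's distributed multi-key FHE scheme, and not secure.
p, q = 17, 19
n = p * q                                      # public modulus
n2 = n * n
g = n + 1                                      # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption precomputation

def encrypt(m, r):
    # E(m) = g^m * r^n mod n^2, with r coprime to n
    assert gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Three users encrypt their (quantised) gradient updates.
c1, c2, c3 = encrypt(5, 2), encrypt(7, 3), encrypt(11, 4)

# The server aggregates without decrypting individual updates:
# multiplying ciphertexts adds the underlying plaintexts.
agg = (c1 * c2 * c3) % n2
print(decrypt(agg))  # 5 + 7 + 11 = 23
```

Weighting each update by a trust score (as the paper's non-poisoning-rate aggregation does) maps to ciphertext exponentiation in such a scheme: `pow(c1, w, n2)` scales the hidden plaintext by the integer weight `w`, so a weighted sum can also be computed entirely in the encrypted domain.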

Authors (5)
  1. Yogachandran Rahulamathavan (9 papers)
  2. Charuka Herath (3 papers)
  3. Xiaolan Liu (16 papers)
  4. Sangarapillai Lambotharan (20 papers)
  5. Carsten Maple (65 papers)
Citations (10)