
Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation (2403.04803v1)

Published 5 Mar 2024 in cs.CR, cs.AI, cs.DC, and cs.LG

Abstract: This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks. We propose a simplified consensus-based verification process integrated with an adaptive thresholding mechanism. This dynamic thresholding is designed to adjust based on the evolving landscape of model updates, offering a refined layer of anomaly detection that aligns with the real-time needs of distributed learning environments. Our method necessitates a majority consensus among participating clients to validate updates, ensuring that only vetted and consensual modifications are applied to the global model. The efficacy of our approach is validated through experiments on two benchmark datasets in deep learning, CIFAR-10 and MNIST. Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience. This method transcends conventional techniques that depend on anomaly detection or statistical validation by incorporating a verification layer reminiscent of blockchain's participatory validation without the associated cryptographic overhead. The innovation of our approach rests in striking an optimal balance between heightened security measures and the inherent limitations of FL systems, such as computational efficiency and data privacy. Implementing a consensus mechanism specifically tailored for FL environments paves the way for more secure, robust, and trustworthy distributed machine learning applications, where safeguarding data integrity and model robustness is critical.
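The consensus-based validation with adaptive thresholding described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scoring of updates, the mean-plus-k-standard-deviations threshold rule, and the simple-majority quorum are all assumptions made for the sketch, since the abstract does not specify them.

```python
import statistics

def adaptive_threshold(scores, k=2.0):
    """Threshold that adapts to the current distribution of update scores.
    Mean + k standard deviations is one plausible rule; the paper's exact
    adaptive rule is not given in the abstract."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    return mu + k * sigma

def validate_update(candidate_score, peer_score_views, quorum=0.5):
    """Accept a candidate model update only if a majority of clients,
    each applying the adaptive threshold to its own view of recent
    update scores, votes the candidate non-anomalous."""
    votes = []
    for view in peer_score_views:  # one score history per voting client
        threshold = adaptive_threshold(view)
        votes.append(candidate_score <= threshold)
    # Majority consensus: more than `quorum` of clients must approve
    return sum(votes) / len(votes) > quorum
```

For example, an update whose score sits within the recent distribution (e.g. `validate_update(1.0, [[1.0, 1.2, 0.9, 1.1]] * 5)`) would pass, while a far outlier such as a label-flipped client's update (score 5.0 against the same histories) would be rejected by every voter.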
