Model Poisoning Attacks to Federated Learning via Multi-Round Consistency (2404.15611v2)
Abstract: Model poisoning attacks are critical security threats to Federated Learning (FL). Existing model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal effectiveness when defenses are deployed, and/or 2) they require knowledge of the model updates or local training data on genuine clients. In this work, we make a key observation that their suboptimal effectiveness arises from only leveraging model-update consistency among malicious clients within individual training rounds, which causes the attack effect to self-cancel across training rounds. In light of this observation, we propose PoisonedFL, which enforces multi-round consistency among the malicious clients' model updates while requiring no knowledge about the genuine clients. Our empirical evaluation on five benchmark datasets shows that PoisonedFL breaks eight state-of-the-art defenses and outperforms seven existing model poisoning attacks. We also explore new defenses tailored to PoisonedFL, but our results show that PoisonedFL can be adapted to break them as well. Our study shows that FL systems are considerably less robust than previously thought, underlining the urgency of developing new defense mechanisms.
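The abstract's core observation lends itself to a small simulation. The sketch below is our own NumPy illustration, not the authors' code: the honest-update model, the trimmed-mean aggregator, and all constants are assumptions chosen for clarity. It contrasts an attack that picks a fresh random direction every round, whose effect self-cancels, with one that keeps a single fixed sign vector across rounds, in the spirit of PoisonedFL's multi-round consistency, whose effect accumulates in the global model.

```python
# Toy FL simulation (illustrative only): compare a per-round-consistent attack,
# whose effect self-cancels, with a multi-round-consistent attack that reuses
# one fixed sign vector in every round. The update model, aggregator, and
# constants are assumptions for illustration, not the paper's exact setup.
import numpy as np

def run(rounds, dim, n_honest, n_mal, multi_round, rng):
    model = np.zeros(dim)
    fixed_sign = rng.choice([-1.0, 1.0], size=dim)  # drawn once, reused if multi_round
    for _ in range(rounds):
        # Honest clients: noisy steps pulling the model toward a stand-in optimum at 0.
        honest = -0.1 * model + 0.02 * rng.standard_normal((n_honest, dim))
        # Malicious clients: identical sign updates; fresh signs per round, or fixed.
        sign = fixed_sign if multi_round else rng.choice([-1.0, 1.0], size=dim)
        malicious = np.tile(sign, (n_mal, 1))
        updates = np.vstack([honest, malicious])
        # Server: coordinate-wise trimmed mean (a common Byzantine-robust rule),
        # trimming as many values from each tail as there are attackers.
        aggregated = np.sort(updates, axis=0)[n_mal:-n_mal].mean(axis=0)
        model += aggregated
    return np.linalg.norm(model)  # how far the attack dragged the model from 0

rng = np.random.default_rng(0)
print("fresh direction each round (self-cancelling):",
      run(200, 100, 80, 20, multi_round=False, rng=rng))
print("fixed direction across rounds (accumulating):",
      run(200, 100, 80, 20, multi_round=True, rng=rng))
```

Note that the trimmed mean discards every malicious value outright, yet the one-sided trimming it forces on honest updates still biases each round's aggregate slightly toward the attack direction; with a fixed direction those small per-round biases compound instead of averaging out, which mirrors the intuition behind the paper's observation.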
Authors: Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong