Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning (2404.17617v1)
Abstract: Backdoors injected into federated learning are diluted by subsequent benign updates, which manifests as a significant drop in attack success rate over training rounds until the attack ultimately fails. We introduce a new metric, attack persistence, to quantify this weakening of the backdoor effect. Since improving persistence has received little attention, we propose a Full Combination Backdoor Attack (FCBA) that aggregates trigger information from all combinations of local trigger patterns, embedding a more complete backdoor pattern in the global model. The resulting backdoored global model is more resilient to benign updates, yielding a higher attack success rate on the test set. We evaluate FCBA on three datasets with two model architectures across various settings; its persistence outperforms state-of-the-art federated learning backdoor attacks. On GTSRB, 120 rounds after the attack, our attack success rate exceeds the baseline by over 50%. The core code of our method is available at https://github.com/PhD-TaoLiu/FCBA.
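To make the "full combination" idea concrete, here is a minimal sketch assuming a DBA-style setup in which a global trigger is split into k local patches and malicious clients poison with every non-empty combination of those patches (2^k − 1 variants). All names (`LOCAL_PATCHES`, `stamp`, `full_combination_triggers`, `poison_batch`), the patch coordinates, and the per-image random combination choice are illustrative assumptions, not the authors' implementation; see the GitHub link above for the official code.

```python
# Illustrative sketch of full-combination trigger poisoning (assumed setup,
# not the authors' code): the global trigger is split into k local patches,
# and poisoned samples carry every non-empty combination of those patches.
from itertools import combinations

import numpy as np

# Hypothetical local trigger patches: (row, col, height, width) regions
# that get stamped at maximum pixel intensity.
LOCAL_PATCHES = [
    (0, 0, 2, 4),  # top-left strip
    (0, 6, 2, 4),  # top-right strip
    (4, 0, 2, 4),  # lower-left strip
    (4, 6, 2, 4),  # lower-right strip
]

def stamp(image: np.ndarray, patches) -> np.ndarray:
    """Return a copy of `image` with the given trigger patches set to 1.0."""
    poisoned = image.copy()
    for (r, c, h, w) in patches:
        poisoned[r:r + h, c:c + w] = 1.0
    return poisoned

def full_combination_triggers(patches):
    """Yield every non-empty combination of local patches (2^k - 1 total)."""
    for size in range(1, len(patches) + 1):
        yield from combinations(patches, size)

def poison_batch(images: np.ndarray, target_label: int, rng: np.random.Generator):
    """Stamp each image with a randomly chosen patch combination and relabel it."""
    combos = list(full_combination_triggers(LOCAL_PATCHES))
    poisoned = np.stack(
        [stamp(img, combos[rng.integers(len(combos))]) for img in images]
    )
    labels = np.full(len(images), target_label)
    return poisoned, labels
```

Under this reading, the attacker tests with the full trigger (all patches at once); because the global model has absorbed every sub-combination during poisoning, the aggregated backdoor pattern is more complete and, per the paper's claim, survives dilution by benign updates for more rounds than single-pattern or distributed-only triggers.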