
Towards Understanding Adversarial Transferability in Federated Learning (2310.00616v2)

Published 1 Oct 2023 in cs.LG and cs.CV

Abstract: We investigate a specific security risk in federated learning (FL): a group of malicious clients influences the model during training by disguising themselves as benign clients, then later switches to an adversarial role. These clients use their own data, which was part of the training set, to train a substitute model and mount transferable adversarial attacks against the federated model. This type of attack is subtle and hard to detect because the clients initially appear benign. The key question we address is: how robust is the FL system to such covert attacks, especially compared to traditional centralized learning systems? We empirically show that the proposed attack poses a high security risk to current FL systems. Using only 3% of a client's data, we achieve an attack success rate of over 80%. To offer a fuller understanding of the challenges FL systems face under transferable attacks, we provide a comprehensive analysis of the transfer robustness of FL across a spectrum of configurations. Surprisingly, FL systems show a higher level of robustness than their centralized counterparts, especially when both systems perform equally well on regular, non-malicious data. We attribute this increased robustness to two main factors: 1) Decentralized Data Training: each client trains the model on its own data, reducing the overall impact of any single malicious client; and 2) Model Update Averaging: the updates from each client are averaged together, further diluting any malicious alterations. Both practical experiments and theoretical analysis support our conclusions. This research not only sheds light on the resilience of FL systems against hidden attacks but also raises important considerations for their future application and development. A minimal sketch of this setup is given below.
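The abstract describes the threat model and the two robustness factors in prose; the following is a minimal PyTorch sketch, not the paper's implementation, of that setup: FedAvg-style state-dict averaging (the "Model Update Averaging" factor) and a single-step transfer attack crafted on a substitute model trained by a formerly benign client. The choice of FGSM, the ε value, and the helper names are illustrative assumptions; the paper evaluates a broader spectrum of attacks and configurations.

```python
# Minimal sketch (assumptions noted above, not the paper's code) of:
#  1) FedAvg aggregation, where averaging dilutes any single client's influence,
#  2) a transferable attack crafted on a substitute model and applied to the
#     aggregated global model.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def fedavg(global_model: nn.Module, client_states: list) -> nn.Module:
    """Average client state dicts (the 'Model Update Averaging' factor).
    Assumes floating-point parameters/buffers; load_state_dict casts back."""
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = torch.stack(
            [state[key].float() for state in client_states], dim=0
        ).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
    """Craft adversarial examples on the (white-box) substitute model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()


def transfer_attack_success(substitute: nn.Module, global_model: nn.Module,
                            x: torch.Tensor, y: torch.Tensor) -> float:
    """Fraction of inputs that fool the global model when the perturbation is
    computed only on the substitute (the transferable adversarial attack)."""
    x_adv = fgsm_attack(substitute, x, y)
    with torch.no_grad():
        preds = global_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```

In the paper's threat model, the substitute would be trained on the small fraction (e.g., 3%) of local data the malicious client contributed during training; the sketch only shows how such a substitute would be used once it exists.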
