FedMID: A Data-Free Method for Using Intermediate Outputs as a Defense Mechanism Against Poisoning Attacks in Federated Learning (2404.11905v1)

Published 18 Apr 2024 in cs.LG and cs.CR

Abstract: Federated learning combines local updates from clients to produce a global model, which makes it susceptible to poisoning attacks. Most previous defense strategies rely on vectors derived from projections of local updates onto a Euclidean space; however, these vectors fail to accurately represent the functionality and structure of local models, resulting in inconsistent performance. Here, we present a new paradigm for defending against poisoning attacks in federated learning that uses functional mappings of local models based on intermediate outputs. Experiments show that our mechanism is robust under a broad range of computing conditions and advanced attack scenarios, enabling safer collaboration among data-sensitive participants via federated learning.
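
The paper itself is not reproduced on this page, so the Python sketch below only illustrates the general idea stated in the abstract: compare client models by their intermediate-layer outputs (a functional view) rather than by raw parameter vectors, and exclude clients whose outputs disagree with the majority before averaging. Everything in it, including the toy intermediate_output forward pass, the synthetic data-free probe, the mean-cosine-similarity score, and the keep_ratio cutoff, is an illustrative assumption, not the authors' actual FedMID algorithm.

```python
# Minimal sketch (assumed, not the FedMID implementation): score client models
# by how similar their intermediate-layer outputs are on synthetic, data-free
# probe inputs, then aggregate only the models that are not outliers.
import numpy as np

rng = np.random.default_rng(0)

def intermediate_output(weights, probe):
    # Stand-in for a forward pass up to an intermediate layer:
    # here a single linear layer followed by ReLU.
    return np.maximum(probe @ weights, 0.0)

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_and_average(client_weights, probe, keep_ratio=0.7):
    # Functional comparison: each client's intermediate outputs on a shared probe.
    outputs = [intermediate_output(w, probe) for w in client_weights]
    n = len(outputs)
    # Score each client by its mean similarity to every other client.
    scores = [np.mean([cosine(outputs[i], outputs[j]) for j in range(n) if j != i])
              for i in range(n)]
    # Keep the most mutually consistent clients and average their weights.
    keep = np.argsort(scores)[-int(np.ceil(keep_ratio * n)):]
    return np.mean([client_weights[i] for i in keep], axis=0), keep

# Toy setup: 8 benign clients near a common solution, 2 poisoned (sign-flipped) ones.
d_in, d_out = 16, 8
base = rng.normal(size=(d_in, d_out))
clients = [base + 0.05 * rng.normal(size=base.shape) for _ in range(8)]
clients += [-base + rng.normal(size=base.shape) for _ in range(2)]
probe = rng.normal(size=(32, d_in))  # synthetic, data-free probe inputs
global_weights, kept = filter_and_average(clients, probe)
print("kept clients:", sorted(kept.tolist()))
```

In this toy run the two sign-flipped "poisoned" models produce intermediate outputs that disagree with the benign majority, so they receive low similarity scores and are excluded from the average; the keep_ratio cutoff is deliberately conservative and would be tuned differently in practice.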

Authors (4)
  1. Sungwon Han (20 papers)
  2. Hyeonho Song (3 papers)
  3. Sungwon Park (19 papers)
  4. Meeyoung Cha (63 papers)
