FedMID: A Data-Free Method for Using Intermediate Outputs as a Defense Mechanism Against Poisoning Attacks in Federated Learning (2404.11905v1)
Abstract: Federated learning combines local updates from clients to produce a global model, which makes it susceptible to poisoning attacks. Most previous defense strategies relied on vectors obtained by projecting local updates into Euclidean space; however, such vector representations fail to capture the functionality and structure of the local models, resulting in inconsistent defense performance. Here, we present a new paradigm for defending against poisoning attacks in federated learning that characterizes local models by their functional mappings, derived from intermediate outputs. Experiments show that our mechanism is robust under a broad range of computing conditions and advanced attack scenarios, enabling safer collaboration among data-sensitive participants via federated learning.
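The abstract's core idea, comparing client models by what they *compute* on probe inputs rather than by their raw parameter vectors, can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm: the one-hidden-layer MLP, the random (data-free) probe inputs, the median-distance outlier score, and the `robust_aggregate` helper are all illustrative assumptions.

```python
import numpy as np

def hidden_activations(weights, probes):
    """Intermediate (hidden-layer) outputs of a one-hidden-layer ReLU MLP."""
    W1, _W2 = weights
    return np.maximum(probes @ W1, 0.0)

def functional_scores(client_weights, probes):
    """Score each client by its median functional distance to the others,
    measured on intermediate outputs instead of raw parameter vectors."""
    acts = [hidden_activations(w, probes).ravel() for w in client_weights]
    n = len(acts)
    scores = np.zeros(n)
    for i in range(n):
        dists = [np.linalg.norm(acts[i] - acts[j]) for j in range(n) if j != i]
        scores[i] = np.median(dists)
    return scores

def robust_aggregate(client_weights, n_malicious, rng):
    """Drop the n_malicious most functionally-deviant clients, average the rest.
    Probes are random noise, so no client data is needed (data-free)."""
    in_dim = client_weights[0][0].shape[0]
    probes = rng.standard_normal((32, in_dim))
    scores = functional_scores(client_weights, probes)
    keep = np.argsort(scores)[: len(client_weights) - n_malicious]
    W1 = np.mean([client_weights[i][0] for i in keep], axis=0)
    W2 = np.mean([client_weights[i][1] for i in keep], axis=0)
    return (W1, W2), keep
```

A poisoned update with large parameter perturbations produces hidden activations far from the benign cluster on the same probes, so its median functional distance stands out and it is excluded from the average.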
- Sungwon Han
- Hyeonho Song
- Sungwon Park
- Meeyoung Cha