
FedDefender: Backdoor Attack Defense in Federated Learning (2307.08672v2)

Published 2 Jul 2023 in cs.CR, cs.AI, cs.CV, and cs.LG

Abstract: Federated Learning (FL) is a privacy-preserving distributed machine learning technique that enables individual clients (e.g., user participants, edge devices, or organizations) to train a model on their local data in a secure environment and then share the trained model with an aggregator to build a global model collaboratively. In this work, we propose FedDefender, a defense mechanism against targeted poisoning attacks in FL by leveraging differential testing. Our proposed method fingerprints the neuron activations of clients' models on the same input and uses differential testing to identify a potentially malicious client containing a backdoor. We evaluate FedDefender using MNIST and FashionMNIST datasets with 20 and 30 clients, and our results demonstrate that FedDefender effectively mitigates such attacks, reducing the attack success rate (ASR) to 10% without deteriorating the global model performance.
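The core idea in the abstract, comparing clients' neuron-activation fingerprints on a common input and flagging the client that diverges from the majority, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the element-wise-median consensus, and the z-score outlier threshold are all assumptions made for the example.

```python
import numpy as np

def flag_suspicious_clients(fingerprints, z_thresh=2.0):
    """Differential-testing sketch (hypothetical, not FedDefender's exact
    algorithm): compare each client's activation fingerprint against a
    consensus fingerprint and flag statistical outliers.

    fingerprints: array of shape (n_clients, n_neurons), the activations
    each client's model produced on the same probe input.
    """
    fp = np.asarray(fingerprints, dtype=float)
    # Consensus = element-wise median, robust to a minority of bad clients.
    consensus = np.median(fp, axis=0)
    # Per-client deviation from the consensus behaviour.
    dev = np.linalg.norm(fp - consensus, axis=1)
    mu, sigma = dev.mean(), dev.std()
    if sigma == 0:  # all clients behave identically; nothing to flag
        return []
    # Flag clients whose deviation is an outlier under a z-score test.
    return [i for i, d in enumerate(dev) if (d - mu) / sigma > z_thresh]
```

For example, with nine clients whose fingerprints agree and one whose activations diverge sharply (as a backdoored model might on a trigger-free probe), the divergent client is the one flagged. In practice the fingerprint would be collected via forward hooks on the model rather than handed in as a matrix.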

Authors (3)
  1. Waris Gill
  2. Ali Anwar
  3. Muhammad Ali Gulzar