
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios (2306.08011v1)

Published 13 Jun 2023 in cs.LG, cs.AI, and cs.CR

Abstract: Federated learning (FL) naturally faces the problem of data heterogeneity in real-world scenarios, but this is often overlooked by studies on FL security and privacy. On the one hand, the effectiveness of backdoor attacks on FL may drop significantly under non-IID scenarios. On the other hand, malicious clients may steal private data through privacy inference attacks. Therefore, a comprehensive perspective covering data heterogeneity, backdoor attacks, and privacy inference is necessary. In this paper, we propose a novel privacy inference-empowered stealthy backdoor attack (PI-SBA) scheme for FL under non-IID scenarios. First, a diverse data reconstruction mechanism based on generative adversarial networks (GANs) is proposed to produce a supplementary dataset, which improves the attacker's local data distribution and supports more sophisticated backdoor attack strategies. Building on this, we design a source-specified backdoor learning (SSBL) strategy as a demonstration, allowing the adversary to specify arbitrarily which classes are susceptible to the backdoor trigger. Since PI-SBA has an independent poisoned-data synthesis process, it can be integrated into existing backdoor attacks to improve their effectiveness and stealthiness in non-IID scenarios. Extensive experiments on the MNIST, CIFAR-10, and YouTube Aligned Face datasets demonstrate that the proposed PI-SBA scheme is effective in non-IID FL and stealthy against state-of-the-art defense methods.
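
The SSBL strategy described in the abstract is, at its core, selective data poisoning: the trigger and flipped label are applied only to samples from attacker-chosen source classes (drawn from the attacker's GAN-augmented local dataset), so the backdoor fires for those classes while the rest of the task is untouched. Below is a minimal, self-contained sketch of that selection step; the corner-square trigger, the poisoning rate, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of source-specified backdoor poisoning (SSBL-style).
# Assumptions (not from the paper): a bottom-right square trigger,
# a 50% poisoning rate over source-class samples, NumPy arrays for data.
import numpy as np

def add_trigger(x, value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner (assumed pattern)."""
    x = x.copy()
    x[..., -size:, -size:] = value
    return x

def ssbl_poison(images, labels, source_classes, target_class,
                poison_rate=0.5, rng=None):
    """Poison a fraction of samples, but only those whose label is in
    `source_classes`; all other classes stay clean, making the backdoor
    source-specific rather than universal."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    # Candidate indices: samples belonging to the attacker-specified classes.
    candidates = np.flatnonzero(np.isin(labels, source_classes))
    n_poison = int(len(candidates) * poison_rate)
    chosen = rng.choice(candidates, size=n_poison, replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = target_class  # flip label to the attacker's target
    return images, labels

# Example: poison half of the class-1 and class-7 samples toward class 0.
if __name__ == "__main__":
    X = np.random.rand(100, 1, 28, 28).astype(np.float32)  # MNIST-shaped toy data
    y = np.random.randint(0, 10, size=100)
    Xp, yp = ssbl_poison(X, y, source_classes=[1, 7], target_class=0)
```

In this toy version, the stealth benefit is visible directly: a model trained on the poisoned set behaves normally on every class outside `source_classes`, so defenses that look for broad accuracy degradation or universal trigger responses get a weaker signal than with an all-class backdoor.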
