Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing (2403.13108v2)

Published 19 Mar 2024 in cs.LG, cs.CR, cs.DC, and eess.SP

Abstract: Federated learning (FL) allows training machine learning models on distributed data without compromising privacy. However, FL is vulnerable to model-poisoning attacks where malicious clients tamper with their local models to manipulate the global model. In this work, we investigate the resilience of the partial-sharing online FL (PSO-Fed) algorithm against such attacks. PSO-Fed reduces communication overhead by allowing clients to share only a fraction of their model updates with the server. We demonstrate that this partial sharing mechanism has the added advantage of enhancing PSO-Fed's robustness to model-poisoning attacks. Through theoretical analysis, we show that PSO-Fed maintains convergence even under Byzantine attacks, where malicious clients inject noise into their updates. Furthermore, we derive a formula for PSO-Fed's mean square error, considering factors like stepsize, attack probability, and the number of malicious clients. Interestingly, we find a non-trivial optimal stepsize that maximizes PSO-Fed's resistance to these attacks. Extensive numerical experiments confirm our theoretical findings and showcase PSO-Fed's superior performance against model-poisoning attacks compared to other leading FL algorithms.
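The abstract's central mechanism, partial sharing, can be made concrete: at each round every client exchanges only a small subset of its model entries with the server, selected by a coordinate mask, and a Byzantine client may additionally perturb whatever it sends. The sketch below is a minimal, illustrative simulation of this setup on an online linear-regression (LMS-style) task. All names, the random masking scheme, the averaging rule at the server, and the Gaussian noise-injection attack model are assumptions made for illustration; they are not the paper's exact PSO-Fed recursion or analysis.

```python
import numpy as np

# Minimal sketch of partial-sharing online FL under a Byzantine
# noise-injection attack. The masking scheme, aggregation rule, and
# attack model here are illustrative assumptions, not PSO-Fed verbatim.

rng = np.random.default_rng(0)

D, K = 20, 10          # model dimension, number of clients
M = 5                  # entries each client shares per round (M < D)
mu = 0.05              # stepsize (the paper finds a non-trivial optimum)
p_attack = 0.3         # probability a malicious client attacks in a round
byzantine = {0, 1}     # indices of malicious clients
sigma_a = 1.0          # std of the injected attack noise

w_star = rng.standard_normal(D)        # ground-truth model
w_global = np.zeros(D)                 # server model
w_local = np.tile(w_global, (K, 1))    # per-client local models

for t in range(2000):
    agg = np.zeros(D)
    cnt = np.zeros(D)
    for k in range(K):
        # Random coordinate mask; reused for both download and upload
        # in this sketch for simplicity.
        mask = rng.choice(D, size=M, replace=False)
        # Server -> client: pull only the masked global entries.
        w_local[k, mask] = w_global[mask]
        # One online LMS step on a fresh local streaming sample.
        x = rng.standard_normal(D)
        y = x @ w_star + 0.1 * rng.standard_normal()
        e = y - x @ w_local[k]
        w_local[k] += mu * e * x
        # Client -> server: send only the masked entries, possibly poisoned.
        update = w_local[k, mask].copy()
        if k in byzantine and rng.random() < p_attack:
            update += sigma_a * rng.standard_normal(M)  # noise injection
        agg[mask] += update
        cnt[mask] += 1
    # Server: average each coordinate over the clients that shared it.
    shared = cnt > 0
    w_global[shared] = agg[shared] / cnt[shared]

print("final mean-square deviation:", np.mean((w_global - w_star) ** 2))
```

The intuition this toy run illustrates matches the abstract's claim: because a malicious client exposes (and can poison) only M of D coordinates per round, injected noise enters the global model at a reduced rate, so partial sharing buys robustness on top of its communication savings.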
