FedRFQ: Prototype-Based Federated Learning with Reduced Redundancy, Minimal Failure, and Enhanced Quality (2401.07558v1)
Abstract: Federated learning is a powerful technique that enables collaborative learning among distributed clients. Prototype-based federated learning is a specific approach that improves the performance of local models under non-IID (non-Independently and Identically Distributed) settings by integrating class prototypes. However, prototype-based federated learning faces several challenges, such as prototype redundancy and prototype failure, which limit its accuracy. It is also susceptible to poisoning attacks and server malfunctions, which can degrade prototype quality. To address these issues, we propose FedRFQ, a prototype-based federated learning approach that reduces redundancy, minimizes failures, and improves quality. FedRFQ leverages a SoftPool mechanism, which effectively mitigates prototype redundancy and prototype failure on non-IID data. Furthermore, we introduce BFT-detect, a Byzantine Fault Tolerance (BFT) detectable aggregation algorithm, to secure FedRFQ against poisoning attacks and server malfunctions. Finally, we conduct experiments on three datasets, namely MNIST, FEMNIST, and CIFAR-10, and the results demonstrate that FedRFQ outperforms existing baselines in terms of accuracy when handling non-IID data.
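The SoftPool mechanism the abstract refers to (Stergiou et al., 2021) replaces max or average pooling with a softmax-weighted sum: within each pooling window, every activation is weighted by its own exponential, so large values dominate the output while smaller ones still contribute. The sketch below illustrates the pooling rule on a 1-D vector; the function name `softpool_1d` and the non-overlapping window layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softpool_1d(x, kernel=2):
    # Illustrative SoftPool over non-overlapping 1-D windows:
    # out = sum_i softmax(window)_i * window_i for each window.
    x = np.asarray(x, dtype=np.float64)
    out = []
    for i in range(0, len(x) - kernel + 1, kernel):
        window = x[i:i + kernel]
        w = np.exp(window - window.max())   # numerically stable softmax weights
        out.append(float((w * window).sum() / w.sum()))
    return np.array(out)
```

Unlike max pooling, which discards all but one activation per window, the softmax weighting preserves a graded contribution from every element, which is the property FedRFQ exploits to compress prototypes with less information loss.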