FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in Federated Learning (2405.18291v1)
Abstract: Collaborative fairness stands as an essential element in federated learning to encourage client participation by equitably distributing rewards based on individual contributions. Existing methods primarily focus on adjusting gradient allocations among clients to achieve collaborative fairness. However, they frequently overlook crucial factors such as maintaining consistency across local models and catering to the diverse requirements of high-contributing clients. This oversight inevitably decreases both fairness and model accuracy in practice. To address these issues, we propose FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness, backed by a theoretical convergence guarantee. First, we present the concept of "bounded collaborative fairness (BCF)", which ensures fairness by tailoring rewards to individual clients based on their contributions. Second, to implement BCF, we design a submodel allocation module with a theoretical guarantee of fairness. This module incentivizes high-contributing clients with high-performance submodels containing a diverse range of crucial neurons, thereby preserving consistency across local models. Third, we further develop a dynamic aggregation module to adaptively aggregate submodels, ensuring the equitable treatment of low-frequency neurons and consequently enhancing overall model accuracy. Extensive experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy. We see this work as a significant step towards incentivizing broader client participation in federated learning. The source code is available at https://github.com/wangzihuixmu/FedSAC.
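The two modules described above can be illustrated with a minimal toy sketch. This is not the paper's implementation: the proportional mask sizing, the neuron-importance scores, and the frequency-weighted averaging below are simplifying assumptions chosen only to show the general idea of contribution-based submodel allocation and of averaging each neuron only over the clients that actually received it.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_submodels(contributions, n_neurons, importance):
    """Assign each client a neuron subset whose size scales with its
    contribution; neurons are taken in order of importance, so higher
    contributors receive the crucial neurons plus additional ones."""
    order = np.argsort(importance)[::-1]       # neuron indices, most important first
    max_c = max(contributions)
    masks = []
    for c in contributions:
        k = max(1, int(round(n_neurons * c / max_c)))
        mask = np.zeros(n_neurons, dtype=bool)
        mask[order[:k]] = True                 # top-k most important neurons
        masks.append(mask)
    return masks

def aggregate(local_weights, masks):
    """Average each neuron only over the clients that held it, so
    rarely allocated (low-frequency) neurons are not diluted by
    clients that never trained them."""
    stacked = np.stack(local_weights)          # shape (clients, neurons)
    mask_arr = np.stack(masks)
    counts = mask_arr.sum(axis=0)              # allocation frequency per neuron
    summed = (stacked * mask_arr).sum(axis=0)
    return np.where(counts > 0, summed / np.maximum(counts, 1), 0.0)

# Toy round: 3 clients with unequal contributions, an 8-neuron layer.
contributions = [1.0, 0.5, 0.25]
importance = rng.random(8)
masks = allocate_submodels(contributions, 8, importance)
local_weights = [rng.normal(size=8) * m for m in masks]
global_weights = aggregate(local_weights, masks)
```

In this sketch the top contributor receives the full 8-neuron layer while the weakest receives only the 2 most important neurons, and the aggregation step keeps a neuron's update undiluted even when only one client holds it.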