FedFDP: Fairness-Aware Federated Learning with Differential Privacy (2402.16028v4)
Abstract: Federated learning (FL) is an emerging machine learning paradigm designed to address the challenge of data silos, and it has attracted considerable attention. However, FL faces persistent issues with fairness and data privacy. To tackle both challenges simultaneously, we propose a fairness-aware federated learning algorithm called FedFair. Building on FedFair, we introduce differential privacy to create the FedFDP algorithm, which addresses the trade-offs among fairness, privacy protection, and model performance. In FedFDP, we develop a fairness-aware gradient clipping technique to explore the relationship between fairness and differential privacy. Through convergence analysis, we identify the optimal fairness-adjustment parameters that achieve maximum model performance and fairness simultaneously. Additionally, we present an adaptive clipping method for uploaded loss values to reduce privacy budget consumption. Extensive experimental results show that FedFDP significantly surpasses state-of-the-art solutions in both model performance and fairness.
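The abstract does not specify the clipping rule, but the idea of coupling fairness with differentially private gradient clipping can be sketched as follows. This is a minimal, hypothetical illustration in the style of DP-SGD: a client whose local loss is above the average receives a larger clipping bound, so its update is attenuated less; the adjustment rule, the parameter `alpha`, and all function names are assumptions for illustration, not the paper's actual FedFDP algorithm.

```python
import numpy as np

def fairness_aware_clip_and_noise(grad, client_loss, avg_loss,
                                  base_clip=1.0, alpha=0.5, sigma=1.0,
                                  rng=None):
    """Clip a client gradient with a fairness-adjusted bound, then add
    Gaussian noise for differential privacy.

    Hypothetical scheme: the clipping bound grows with the client's loss
    relative to the average loss (alpha controls the adjustment strength),
    so under-performing clients contribute larger updates. Illustrative
    only; not the published FedFDP rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Fairness-adjusted clipping bound (assumed linear adjustment rule).
    clip = base_clip * (1.0 + alpha * (client_loss - avg_loss) / max(avg_loss, 1e-12))
    clip = max(clip, 1e-12)
    # Standard L2 clipping, as in DP-SGD.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    # Gaussian noise calibrated to the (adjusted) sensitivity.
    noisy = clipped + rng.normal(0.0, sigma * clip, size=grad.shape)
    return noisy, clip
```

Because the clipping bound itself varies per client, the noise scale must track the adjusted bound to preserve the same privacy guarantee, which is why `sigma * clip` rather than a fixed scale appears in the noise term.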