Federated Fairness without Access to Sensitive Groups (2402.14929v1)
Abstract: Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training. However, due to factors ranging from emerging regulations to the dynamic and location-dependent nature of protected groups, this assumption may be unsuitable in many real-world scenarios. In this work, we propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels. Our objective allows the federation to learn a Pareto efficient global model that ensures worst-case group fairness, and it enables, via a single hyper-parameter, trade-offs between fairness and utility, subject only to a group size constraint. This implies that any sufficiently large subset of the population is guaranteed to receive at least a minimum level of utility performance from the model. The proposed objective encompasses existing approaches as special cases, such as empirical risk minimization and subgroup robustness objectives from centralized machine learning. We provide an algorithm to solve this problem in federation that enjoys convergence and excess risk guarantees. Our empirical results indicate that the proposed approach can effectively improve the worst-performing group that may be present without unnecessarily hurting the average performance, exhibits superior or comparable performance to relevant baselines, and achieves a large set of solutions with different fairness-utility trade-offs.
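The abstract's guarantee, that any sufficiently large subset of the population receives at least a minimum level of utility, can be illustrated with a CVaR-style worst-case loss. The sketch below is an assumption for illustration only (the function name and the plain-numpy formulation are hypothetical): it averages the losses of the worst-performing fraction of samples of at least a given size, which recovers empirical risk minimization when the group size constraint covers the whole population. The paper's actual federated objective, smoothing, and algorithm are more involved.

```python
import numpy as np

def worst_group_loss(per_sample_losses, min_group_frac):
    """Average loss over the worst-performing subset containing at
    least `min_group_frac` of the samples (a CVaR-style quantity).
    Illustrative sketch only, not the paper's federated objective."""
    losses = np.sort(np.asarray(per_sample_losses))[::-1]  # descending
    # smallest subset size allowed by the group size constraint
    k = max(1, int(np.ceil(min_group_frac * len(losses))))
    return float(losses[:k].mean())

losses = [0.1, 0.2, 0.9, 0.3]
erm = worst_group_loss(losses, 1.0)     # whole population: plain average
worst = worst_group_loss(losses, 0.25)  # worst 25% of samples
```

Varying the single fraction parameter between 1.0 and small values trades average utility against worst-case subgroup performance, mirroring the fairness-utility trade-off controlled by the paper's hyper-parameter.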
Authors: Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues