Distribution-Free Fair Federated Learning with Small Samples (2402.16158v2)

Published 25 Feb 2024 in stat.ML, cs.CY, and cs.LG

Abstract: As federated learning gains increasing importance in real-world applications due to its capacity for decentralized data training, addressing fairness concerns across demographic groups becomes critically important. However, most existing machine learning algorithms for ensuring fairness are designed for centralized data environments and generally require large-sample and distributional assumptions, underscoring the urgent need for fairness techniques adapted for decentralized and heterogeneous systems with finite-sample and distribution-free guarantees. To address this issue, this paper introduces FedFaiREE, a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples. Our approach accounts for unique challenges in decentralized environments, such as client heterogeneity, communication costs, and small sample sizes. We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
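The abstract describes FedFaiREE only at a high level, so the sketch below is not the authors' algorithm. It is a minimal, hypothetical illustration of the general idea of distribution-free post-processing in a federated setting: each simulated client reports per-group score quantiles (order statistics, which require no distributional assumptions and little communication), and a server picks group-specific decision thresholds so that the implied acceptance rates match across groups, a demographic-parity-style criterion. All data, function names, and the aggregation scheme are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def client_scores(n, group_shift):
    """Simulate one client's model scores and a binary sensitive attribute (hypothetical data)."""
    a = rng.integers(0, 2, size=n)                 # sensitive attribute in {0, 1}
    s = rng.beta(2 + group_shift * a, 2, size=n)   # scores skewed by group
    return s, a

def per_group_quantiles(scores, groups, qs):
    # Each client reports only quantiles of its scores per group, keeping communication small.
    return {g: np.quantile(scores[groups == g], qs) for g in (0, 1)}

qs = np.linspace(0.05, 0.95, 19)                   # candidate quantile levels
clients = [client_scores(n=80, group_shift=1.0) for _ in range(10)]

pooled = {0: [], 1: []}
for s, a in clients:
    q = per_group_quantiles(s, a, qs)
    for g in (0, 1):
        pooled[g].append(q[g])

# Server side: average the client quantiles per group, then choose group-specific
# thresholds whose implied acceptance rates (1 - quantile level) match across groups.
avg_q = {g: np.mean(pooled[g], axis=0) for g in (0, 1)}
target_rate = 0.3                                  # desired positive-prediction rate
level = 1.0 - target_rate
thresholds = {g: float(np.interp(level, qs, avg_q[g])) for g in (0, 1)}

print("Group-specific thresholds:", thresholds)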

Authors (5)
  1. Qichuan Yin (1 paper)
  2. Junzhou Huang (137 papers)
  3. Huaxiu Yao (103 papers)
  4. Linjun Zhang (70 papers)
  5. Zexian Wang (3 papers)
