
Learning Fair Classifiers via Min-Max F-divergence Regularization (2306.16552v1)

Published 28 Jun 2023 in cs.LG, cs.AI, cs.CY, cs.IT, and math.IT

Abstract: As ML-based systems are adopted in domains such as law enforcement, criminal justice, finance, hiring, and admissions, ensuring the fairness of ML-aided decision-making is becoming increasingly important. In this paper, we focus on the problem of fair classification, and introduce a novel min-max F-divergence regularization framework for learning fair classification models while preserving high accuracy. Our framework consists of two trainable networks, namely, a classifier network and a bias/fairness estimator network, where fairness is measured using the statistical notion of F-divergence. We show that F-divergence measures possess convexity and differentiability properties, and that their variational representation makes them widely applicable in practical gradient-based training methods. The proposed framework can be readily adapted to multiple sensitive attributes and to high-dimensional datasets. We study the F-divergence-based training paradigm for two types of group fairness constraints, namely, demographic parity and equalized odds. We present a comprehensive set of experiments on several real-world datasets arising in multiple domains (including the COMPAS, Law Admissions, Adult Income, and CelebA datasets). To quantify the fairness-accuracy tradeoff, we introduce the notion of the fairness-accuracy receiver operating characteristic (FA-ROC) and a corresponding low-bias FA-ROC, which we argue is an appropriate measure for evaluating different classifiers. In comparison to several existing approaches for learning fair classifiers (including pre-processing, post-processing, and other regularization methods), we show that the proposed F-divergence-based framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
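The variational representation mentioned in the abstract is what lets a trained network estimate an F-divergence from samples: for a convex generator f with convex conjugate f*, D_f(P || Q) = sup over critics T of E_P[T(X)] - E_Q[f*(T(X))], and any particular critic yields a lower bound. The NumPy sketch below (an illustration, not the paper's implementation) demonstrates this for the KL divergence, where f*(t) = exp(t - 1). For the toy pair P = N(0,1), Q = N(1,1), the true KL divergence is 0.5 and the optimal critic can be derived by hand as T*(x) = 1 + log(p(x)/q(x)) = 1.5 - x; in the paper's framework this critic's role is played by the trained bias/fairness estimator network, and the classifier minimizes the resulting divergence estimate in a min-max game.

```python
import numpy as np

# Variational lower bound on KL(P || Q):
#     D_KL(P || Q) = sup_T  E_P[T(X)] - E_Q[exp(T(X) - 1)]
# Toy setup: P = N(0,1), Q = N(1,1), so the true KL divergence is 0.5.
rng = np.random.default_rng(0)
n = 200_000
xp = rng.normal(0.0, 1.0, n)  # samples from P
xq = rng.normal(1.0, 1.0, n)  # samples from Q

def kl_lower_bound(critic):
    """Monte Carlo estimate of E_P[T] - E_Q[exp(T - 1)] for a given critic T."""
    return critic(xp).mean() - np.exp(critic(xq) - 1.0).mean()

# Hand-derived optimal critic for this Gaussian pair: T*(x) = 1.5 - x.
bound_optimal = kl_lower_bound(lambda x: 1.5 - x)
# A deliberately suboptimal critic gives a strictly looser lower bound.
bound_weaker = kl_lower_bound(lambda x: 1.5 - 0.5 * x)

print(f"true KL = 0.5, optimal-critic bound ~ {bound_optimal:.3f}, "
      f"weaker-critic bound ~ {bound_weaker:.3f}")
```

Any parametric critic, such as a neural network, can be plugged in for `critic` and trained by gradient ascent on the bound; this is what makes the representation compatible with standard gradient-based training, as the abstract notes.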

Authors (2)
  1. Meiyu Zhong
  2. Ravi Tandon