
Post-hoc Bias Scoring Is Optimal For Fair Classification (2310.05725v3)

Published 9 Oct 2023 in stat.ML and cs.LG

Abstract: We consider a binary classification problem under group fairness constraints, which can be one of Demographic Parity (DP), Equalized Opportunity (EOp), or Equalized Odds (EO). We propose an explicit characterization of the Bayes optimal classifier under the fairness constraints, which turns out to be a simple modification rule on top of the unconstrained classifier. Namely, we introduce a novel instance-level measure of bias, which we call the bias score, and the modification rule is a simple linear rule over a finite number of bias scores. Based on this characterization, we develop a post-hoc approach that allows us to adapt to fairness constraints while maintaining high accuracy. In the case of DP and EOp constraints, the modification rule amounts to thresholding a single bias score, while in the case of EO constraints we are required to fit a linear modification rule with 2 parameters. The method can also be applied to composite group-fairness criteria, such as those involving several sensitive attributes.
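The single-threshold case can be illustrated with a minimal post-processing sketch. This is not the paper's exact bias-score rule, just a hedged example of the general idea: take an unconstrained classifier's scores and pick a group-dependent threshold so that each group's positive rate matches a common target, which enforces Demographic Parity while leaving the underlying model untouched.

```python
import numpy as np

def dp_thresholds(scores, groups, target_rate):
    """Pick a per-group threshold so each group's positive rate
    equals target_rate (illustrative DP post-processing, not the
    paper's bias-score rule)."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # Threshold at the (1 - target_rate) quantile: roughly a
        # target_rate fraction of the group's scores lie above it.
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def predict_fair(scores, groups, thresholds):
    """Apply the group-dependent thresholds to produce 0/1 labels."""
    return np.array(
        [int(s >= thresholds[g]) for s, g in zip(scores, groups)]
    )
```

For DP this one-parameter-per-group thresholding is exactly the shape of rule the abstract describes; the EO case would instead fit a two-parameter linear rule over the bias scores.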

Authors (3)
  1. Wenlong Chen (15 papers)
  2. Yegor Klochkov (13 papers)
  3. Yang Liu (2253 papers)
Citations (3)