Survey on Fairness Notions and Related Tensions (2209.13012v2)
Abstract: Automated decision systems are increasingly used to make consequential decisions, such as job hiring and loan granting, with the hope of replacing subjective human judgments with objective ML algorithms. However, ML-based decision systems are prone to bias, which can result in decisions that remain unfair. Several notions of fairness have been defined in the literature to capture the different subtleties of this ethical and social concept (e.g., statistical parity, equal opportunity, etc.). Imposing fairness requirements while learning models creates several types of tensions among the different notions of fairness and with other desirable properties such as privacy and classification accuracy. This paper surveys the commonly used fairness notions and discusses the tensions among them, as well as with privacy and accuracy. Different methods to address the fairness-accuracy trade-off (classified into four approaches, namely, pre-processing, in-processing, post-processing, and hybrid) are reviewed. The survey is consolidated with an experimental analysis carried out on fairness benchmark datasets to illustrate the relationship between fairness measures and accuracy in real-world scenarios.
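As a minimal illustration of two of the fairness notions named above (a sketch for orientation only, not code from the paper), the following Python snippet computes the statistical parity difference and the equal opportunity difference from model predictions and a binary sensitive attribute; the function names and the toy arrays are hypothetical.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """P(Y_hat=1 | S=0) - P(Y_hat=1 | S=1); zero means statistical parity holds."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean()

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Difference in true positive rates across groups; zero means equal opportunity holds."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    tpr = lambda s: y_pred[(sensitive == s) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy example with hypothetical data: 8 individuals split into two groups (S=0 and S=1).
y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred    = np.array([1, 0, 1, 0, 0, 1, 1, 1])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_difference(y_pred, sensitive))         # -0.25 (group 1 is favored)
print(equal_opportunity_difference(y_true, y_pred, sensitive))  # -0.333... (lower TPR for group 0)
```

In practice, such gap measures are what the fairness-accuracy trade-off is evaluated against: mitigation methods (pre-, in-, post-processing, or hybrid) aim to shrink these differences while limiting the loss in classification accuracy.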
- Guilherme Alves
- Fabien Bernier
- Miguel Couceiro
- Catuscia Palamidessi
- Sami Zhioua
- Karima Makhlouf