
Fairness-Accuracy Trade-Offs: A Causal Perspective (2405.15443v2)

Published 24 May 2024 in cs.LG, cs.AI, and stat.ML

Abstract: Systems based on machine learning may exhibit discriminatory behavior based on sensitive characteristics such as gender, sex, religion, or race. In light of this, various notions of fairness and methods to quantify discrimination have been proposed, leading to the development of numerous approaches for constructing fair predictors. At the same time, imposing fairness constraints may decrease the utility of the decision-maker, highlighting a tension between fairness and utility. This tension is also recognized in legal frameworks, for instance in the disparate impact doctrine of Title VII of the Civil Rights Act of 1964, in which specific attention is given to considerations of business necessity, possibly allowing the use of proxy variables associated with the sensitive attribute when a high enough utility cannot be achieved without them. In this work, we analyze the tension between fairness and accuracy through a causal lens for the first time. We introduce the notion of a path-specific excess loss (PSEL), which captures how much the predictor's loss increases when a causal fairness constraint is enforced. We then show that the total excess loss (TEL), defined as the difference between the loss of a predictor fair along all causal pathways and that of an unconstrained predictor, can be decomposed into a sum of more local PSELs. At the same time, enforcing a causal constraint often reduces the disparity between demographic groups. We therefore introduce a quantity that summarizes the fairness-utility trade-off, called the causal fairness/utility ratio, defined as the ratio of the reduction in discrimination to the excess loss incurred by constraining a causal pathway. This quantity is suitable for comparing the fairness-utility trade-off across causal pathways. Finally, as our approach requires causally constrained fair predictors, we introduce a new neural approach for causally constrained fair learning.
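
To make the abstract's quantities concrete, here is one possible formalization under assumed notation (the paper's exact definitions may differ). Write $\hat{Y}^*$ for the unconstrained loss-minimizing predictor, $\hat{Y}_{C_j}$ for the optimal predictor constrained to be fair along causal pathway $C_j$, $\hat{Y}_{\mathrm{FP}}$ for the predictor fair along all pathways, $L$ for the loss, and $D(\cdot)$ for a measure of disparity between demographic groups. Then

$$\mathrm{PSEL}_j = \mathbb{E}[L(Y, \hat{Y}_{C_j})] - \mathbb{E}[L(Y, \hat{Y}^*)], \qquad \mathrm{TEL} = \mathbb{E}[L(Y, \hat{Y}_{\mathrm{FP}})] - \mathbb{E}[L(Y, \hat{Y}^*)],$$

and the decomposition claimed in the abstract reads $\mathrm{TEL} = \sum_j \mathrm{PSEL}_j$. The causal fairness/utility ratio for pathway $C_j$ would then be

$$\mathrm{CFUR}_j = \frac{D(\hat{Y}^*) - D(\hat{Y}_{C_j})}{\mathrm{PSEL}_j},$$

so a large ratio flags a pathway where constraining yields a large drop in discrimination at a small cost in loss.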
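
As an illustration only, here is a minimal Python sketch of how such a ratio might be estimated from held-out predictions of an unconstrained predictor and a pathway-constrained one. The squared loss, the mean-difference disparity measure, and all function names below are stand-ins chosen for exposition, not the paper's definitions:

import numpy as np

def expected_loss(y_true, y_pred):
    # Squared loss as a stand-in; the paper's loss L may differ.
    return np.mean((y_true - y_pred) ** 2)

def disparity(y_pred, group):
    # Stand-in disparity D: gap in mean prediction between the two groups.
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def causal_fairness_utility_ratio(y_true, group, yhat_unconstrained, yhat_constrained):
    # PSEL: excess loss incurred by enforcing the pathway constraint.
    psel = expected_loss(y_true, yhat_constrained) - expected_loss(y_true, yhat_unconstrained)
    # Reduction in discrimination achieved by the same constraint.
    delta_d = disparity(yhat_unconstrained, group) - disparity(yhat_constrained, group)
    return delta_d / psel if psel > 0 else float("inf")

Pathways with higher ratios give more fairness per unit of lost accuracy, which is the cross-pathway comparison the abstract describes.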

Authors (2)
  1. Drago Plecko (12 papers)
  2. Elias Bareinboim (34 papers)
Citations (1)
