Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity (2306.08394v1)

Published 14 Jun 2023 in cs.CY and cs.LG

Abstract: Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or to create new sources of unfairness. This work supports the contextual approach to fairness under the EU non-discrimination legal framework and aims to assess to what extent legal fairness can be assured through fairness metrics and under fairness constraints. To that end, we analyze the legal notions of non-discrimination and differential treatment using the fairness definition Demographic Parity (DP) together with Conditional Demographic Disparity (CDD). We train and compare classifiers under fairness constraints to assess whether bias in predictions can be reduced while preserving the contextual approach to judicial interpretation practiced under EU non-discrimination law. Our experimental results on three scenarios show that the in-processing bias mitigation algorithm performs differently in each of them. Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective, depending on the case at hand and its legal justification. These preliminary results encourage future work involving further case studies, metrics, and fairness notions.
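The two metrics the abstract names can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the binary group encoding, and the choice to aggregate CDD as a stratum-size-weighted average of per-stratum demographic-parity gaps are assumptions made here for illustration; formulations of CDD in the literature vary in how they condition and aggregate.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (coded 0/1)."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def conditional_demographic_disparity(y_pred, group, strata):
    """Demographic-parity gap conditioned on a legitimate explanatory
    attribute: per-stratum gaps, averaged with stratum-size weights.
    (One possible aggregation; an illustrative assumption, not the paper's.)"""
    gaps, weights = [], []
    for s in np.unique(strata):
        mask = strata == s
        # Skip strata where one of the groups is absent (gap undefined there).
        if len(np.unique(group[mask])) < 2:
            continue
        gaps.append(demographic_parity_gap(y_pred[mask], group[mask]))
        weights.append(mask.mean())
    return float(np.average(gaps, weights=weights))

# Toy example: predictions perfectly separate the two groups,
# so the unconditional DP gap is maximal (1.0).
y_pred = np.array([1, 1, 0, 0])
group = np.array([0, 0, 1, 1])
strata = np.array([0, 0, 0, 0])  # a single stratum: CDD reduces to DP
print(demographic_parity_gap(y_pred, group))            # 1.0
print(conditional_demographic_disparity(y_pred, group, strata))  # 1.0
```

A CDD near zero within every stratum of the conditioning attribute is what allows the contextual, case-by-case justification the EU framework relies on, whereas the unconditional DP gap alone cannot distinguish explainable from prima facie illegal disparity.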

Authors (6)
  1. Lisa Koutsoviti Koumeri
  2. Magali Legast
  3. Yasaman Yousefi
  4. Koen Vanhoof
  5. Axel Legay
  6. Christoph Schommer