Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness (2403.20089v2)

Published 29 Mar 2024 in cs.AI

Abstract: The topic of fairness in AI, as debated in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) communities, has sparked meaningful discussions in the past years. However, from a legal perspective, particularly from the perspective of European Union law, many open questions remain. Whereas algorithmic fairness aims to mitigate structural inequalities at design-level, European non-discrimination law is tailored to individual cases of discrimination after an AI model has been deployed. The AI Act might present a tremendous step towards bridging these two approaches by shifting non-discrimination responsibilities into the design stage of AI models. Based on an integrative reading of the AI Act, we comment on legal as well as technical enforcement problems and propose practical implications on bias detection and bias correction in order to specify and comply with specific technical requirements.
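
To make the abstract's notion of design-stage bias detection concrete, here is a minimal, hedged sketch: it computes per-group selection rates for a binary classifier's predictions and the demographic parity difference between groups. This illustrates one common fairness metric only; it is not the specific procedure proposed in the paper, and the data, group labels, and tolerance threshold are hypothetical.

```python
# Minimal sketch (not the paper's method): design-stage bias detection via
# the demographic parity difference between protected groups.
# All data below is hypothetical.
import numpy as np

def selection_rate(y_pred: np.ndarray) -> float:
    """Share of positive (favourable) predictions."""
    return float(np.mean(y_pred))

def demographic_parity_difference(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in selection rates across groups; 0 means parity."""
    rates = [selection_rate(y_pred[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = favourable outcome) and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_difference(y_pred, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A provider might flag the model for bias correction if the gap exceeds an
# internally chosen tolerance, e.g. 0.1 (an assumption, not a legal threshold).
if gap > 0.1:
    print("Selection rates differ across groups; consider bias mitigation.")
```

Demographic parity is only one of many metrics in the algorithmic fairness literature, and, as the paper's legal framing suggests, how well any single metric maps onto EU non-discrimination law is itself contested.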

Authors (5)
  1. Luca Deck (4 papers)
  2. Jan-Laurin Müller (1 paper)
  3. Conradin Braun (1 paper)
  4. Domenique Zipperling (6 papers)
  5. Niklas Kühl (94 papers)
Citations (1)