Bias in Data-driven AI Systems -- An Introductory Survey (2001.09762v1)
Abstract: AI-based systems are widely employed nowadays to make decisions that have far-reaching impacts on individuals and society. Their decisions can affect anyone, anywhere, at any time, raising concerns about potential human rights violations. It is therefore necessary to move beyond traditional AI algorithms optimized for predictive performance and to embed ethical and legal principles in their design, training and deployment, ensuring social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multi-disciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches that are well grounded in a legal framework. In this survey, we focus on data-driven AI, since a large part of AI is nowadays powered by (big) data and powerful Machine Learning (ML) algorithms. Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, etc.
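To make the abstract's working definition of bias concrete, the sketch below computes demographic parity difference, one standard fairness metric that compares the rate of favorable decisions across demographic groups. This is an illustrative example, not a method from the survey itself; the function name and the toy data are invented for demonstration.

```python
# Minimal sketch (assumed, not from the paper): quantifying "prejudiced
# decisions on the basis of demographic features" via demographic parity,
# i.e., the gap in positive-decision rates between groups.

def demographic_parity_difference(decisions, groups):
    """Return the largest gap in positive-decision rates across groups.

    decisions: list of 0/1 outcomes (1 = favorable decision)
    groups:    list of group labels, one per decision (e.g., "a", "b")
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]


if __name__ == "__main__":
    # Toy data: group "a" gets favorable decisions 75% of the time,
    # group "b" only 25% -- a demographic parity gap of 0.5.
    decisions = [1, 1, 1, 0, 1, 0, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_difference(decisions, groups))  # 0.5
```

A gap of 0 would mean both groups receive favorable decisions at the same rate; larger values signal the kind of disparity the survey's notion of bias is concerned with.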
- Eirini Ntoutsi
- Pavlos Fafalios
- Ujwal Gadiraju
- Vasileios Iosifidis
- Wolfgang Nejdl
- Maria-Esther Vidal
- Salvatore Ruggieri
- Franco Turini
- Symeon Papadopoulos
- Emmanouil Krasanakis
- Ioannis Kompatsiaris
- Katharina Kinder-Kurlanda
- Claudia Wagner
- Fariba Karimi
- Miriam Fernandez
- Harith Alani
- Bettina Berendt
- Tina Kruegel
- Christian Heinze
- Klaus Broelemann