BeFair: Addressing Fairness in the Banking Sector (2102.02137v2)
Published 3 Feb 2021 in cs.LG and cs.CY
Abstract: Algorithmic bias mitigation has been one of the most difficult conundrums for the data science community and Machine Learning (ML) experts. In recent years, enormous effort has been devoted to the field of fairness in ML. Despite the progress toward identifying biases and designing fair algorithms, translating them into industry practice remains a major challenge. In this paper, we present the initial results of an industrial open innovation project in the banking sector: we propose a general roadmap for fairness in ML and the implementation of a toolkit called BeFair that helps to identify and mitigate bias. Results show that training a model without explicit constraints may lead to bias exacerbation in the predictions.
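To make the abstract's final claim concrete, the sketch below (which is illustrative and not the BeFair toolkit itself) compares a fairness metric for an unconstrained classifier against one trained under an explicit fairness constraint, using the open-source fairlearn library on synthetic data; all variable names and the synthetic setup are assumptions for illustration.

```python
# Illustrative sketch (not the BeFair toolkit): compare demographic parity
# for an unconstrained model vs. one trained with an explicit constraint.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, size=n)             # hypothetical protected attribute
X = rng.normal(size=(n, 5)) + sensitive[:, None]   # features correlated with it
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Unconstrained model: bias already present in the data can be exacerbated.
clf = LogisticRegression().fit(X, y)
pred_unconstrained = clf.predict(X)

# Model trained under an explicit demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
pred_constrained = mitigator.predict(X)

print("DP difference (unconstrained):",
      demographic_parity_difference(y, pred_unconstrained, sensitive_features=sensitive))
print("DP difference (constrained):  ",
      demographic_parity_difference(y, pred_constrained, sensitive_features=sensitive))
```

A smaller demographic-parity difference for the constrained model would indicate that the explicit constraint reduced the disparity that the unconstrained training left in the predictions.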
Authors:
- Alessandro Castelnovo
- Riccardo Crupi
- Giulia Del Gamba
- Greta Greco
- Aisha Naseer
- Daniele Regoli
- Beatriz San Miguel Gonzalez