Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness? (2212.02614v1)

Published 5 Dec 2022 in cs.LG, cs.AI, and cs.CY

Abstract: As ML systems get adopted in more critical areas, it has become increasingly crucial to address the bias that could occur in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble. We report on lessons learned that can help practitioners better select fairness algorithms for their models.
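To make the idea of a fairness pre-processing algorithm concrete, here is a minimal, self-contained sketch of one well-known technique in this family, reweighing (Kamiran & Calders), which assigns each training instance a weight so that the protected attribute and the label become statistically independent in the weighted data. This is an illustrative example only, not the paper's ensemble method; the function name and toy data are invented for demonstration.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders-style reweighing (illustrative sketch).

    Each instance with protected-group value a and label y gets
        w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)
    so that, under the weights, A and Y are independent.
    """
    n = len(groups)
    p_a = Counter(groups)                 # marginal counts of the group
    p_y = Counter(labels)                 # marginal counts of the label
    p_ay = Counter(zip(groups, labels))   # joint counts
    return [
        (p_a[a] / n) * (p_y[y] / n) / (p_ay[(a, y)] / n)
        for a, y in zip(groups, labels)
    ]

# Toy data: group 0 is mostly labeled 0, group 1 mostly labeled 1,
# so the joint distribution is biased and needs rebalancing.
groups = [0, 0, 0, 1, 1, 1]
labels = [0, 0, 1, 1, 1, 0]
weights = reweighing_weights(groups, labels)
# Under-represented cells like (0, 1) receive weights above 1,
# over-represented cells like (0, 0) receive weights below 1.
```

These weights would then be passed to a learner that supports per-sample weights (e.g. `sample_weight` in scikit-learn estimators). The paper's ensemble idea goes further, combining several such pre-processors rather than relying on any single fairness notion.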

Authors (8)
  1. Khaled Badran (4 papers)
  2. Pierre-Olivier Côté (4 papers)
  3. Amanda Kolopanis (1 paper)
  4. Rached Bouchoucha (5 papers)
  5. Antonio Collante (1 paper)
  6. Diego Elias Costa (28 papers)
  7. Emad Shihab (34 papers)
  8. Foutse Khomh (140 papers)