Reducing Unintended Bias of ML Models on Tabular and Textual Data (2108.02662v1)

Published 5 Aug 2021 in cs.LG, cs.AI, and cs.CY

Abstract: Unintended biases in ML models are among the major concerns that must be addressed to maintain public trust in ML. In this paper, we address process fairness of ML models, which consists of reducing the dependence of models on sensitive features without compromising their performance. We revisit the FixOut framework, which is inspired by the "fairness through unawareness" approach, to build fairer models. We introduce several improvements, such as automating the choice of FixOut's parameters. In addition, while FixOut was originally proposed to improve the fairness of ML models on tabular data, we also demonstrate the feasibility of FixOut's workflow for models on textual data. We present several experimental results illustrating that FixOut improves process fairness in different classification settings.
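
The "fairness through unawareness" idea that FixOut builds on can be sketched as training a model with and without the sensitive features and comparing performance. The sketch below uses a synthetic dataset and scikit-learn; the feature names, sensitive-column indices, and model choice are illustrative assumptions, not details from the paper (FixOut itself additionally uses explanation methods and an ensemble, which this sketch omits).

```python
# Minimal sketch of "fairness through unawareness": drop the sensitive
# features and check that predictive performance is not compromised.
# Data and column roles are synthetic assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical columns: [income, tenure, sensitive_a, sensitive_b]
X = rng.normal(size=(n, 4))
sensitive = [2, 3]  # assumed indices of sensitive features
# Label depends only on the non-sensitive features here.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model trained on all features, including sensitive ones.
full = LogisticRegression().fit(X_tr, y_tr)

# "Unaware" model trained only on the non-sensitive features.
keep = [i for i in range(X.shape[1]) if i not in sensitive]
unaware = LogisticRegression().fit(X_tr[:, keep], y_tr)

acc_full = full.score(X_te, y_te)
acc_unaware = unaware.score(X_te[:, keep], y_te)
print(f"full: {acc_full:.3f}  unaware: {acc_unaware:.3f}")
```

If the sensitive features carry little task-relevant signal, the two accuracies should be close, which is the "without compromising performance" condition the abstract refers to.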

Authors (5)
  1. Guilherme Alves (3 papers)
  2. Maxime Amblard (19 papers)
  3. Fabien Bernier (5 papers)
  4. Miguel Couceiro (61 papers)
  5. Amedeo Napoli (25 papers)
Citations (14)