Bias Mitigation Framework for Intersectional Subgroups in Neural Networks (2212.13014v1)

Published 26 Dec 2022 in cs.LG and cs.CY

Abstract: We propose a fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes. Prior research has primarily focused on mitigating one kind of bias by incorporating complex fairness-driven constraints into optimization objectives or by designing additional layers that focus on specific protected attributes. We introduce a simple and generic bias mitigation approach that prevents models from learning relationships between protected attributes and the output variable by reducing the mutual information between them. We demonstrate that our approach is effective in reducing bias with little or no drop in accuracy. We also show that models trained with our learning framework become causally fair and insensitive to the values of protected attributes. Finally, we validate our approach by studying feature interactions between protected and non-protected attributes. We demonstrate that these interactions are significantly reduced when our bias mitigation is applied.

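The abstract describes the core idea at a high level: penalize the mutual information between the model's predictions and the protected attribute during training. The sketch below is not the authors' implementation; it is a minimal illustration of that idea using a plug-in batch estimate of I(Ŷ; A) for a binary protected attribute, added to the task loss with an assumed weight lambda_mi. Intersectional subgroups could be handled analogously by encoding each attribute combination as a discrete group ID.

```python
# Minimal sketch (assumed, not the paper's code): train a classifier while
# penalizing a plug-in estimate of the mutual information I(Y_hat; A) between
# the predicted class distribution and a binary protected attribute A.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mi_penalty(probs: torch.Tensor, a: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Plug-in batch estimate of I(Y_hat; A).

    probs: (N, C) softmax outputs; a: (N,) protected attribute in {0, 1}.
    """
    p_y = probs.mean(dim=0)                        # marginal p(y_hat) over the batch
    mi = torch.zeros((), device=probs.device)
    for value in (0, 1):
        mask = (a == value)
        if mask.sum() == 0:
            continue
        p_a = mask.float().mean()                  # p(A = value)
        p_y_given_a = probs[mask].mean(dim=0)      # p(y_hat | A = value)
        # Accumulate p(a) * KL(p(y_hat | a) || p(y_hat))
        mi = mi + p_a * torch.sum(
            p_y_given_a * (torch.log(p_y_given_a + eps) - torch.log(p_y + eps))
        )
    return mi

# Assumed toy setup: a small classifier and a training step that combines the
# task loss with the MI penalty, weighted by lambda_mi (hypothetical hyperparameter).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_mi = 1.0

def train_step(x: torch.Tensor, y: torch.Tensor, a: torch.Tensor) -> float:
    logits = model(x)
    loss = F.cross_entropy(logits, y) + lambda_mi * mi_penalty(F.softmax(logits, dim=1), a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Driving the penalty toward zero pushes the predicted class distribution to be the same across protected-attribute values, which is the stated goal of making the model insensitive to those attributes; the exact estimator and weighting used in the paper may differ.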
Authors (6)
  1. Narine Kokhlikyan (15 papers)
  2. Bilal Alsallakh (11 papers)
  3. Fulton Wang (8 papers)
  4. Vivek Miglani (7 papers)
  5. Oliver Aobo Yang (1 paper)
  6. David Adkins (3 papers)
Citations (1)