Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness (2211.11109v2)

Published 20 Nov 2022 in cs.CL, cs.AI, cs.CY, and cs.LG

Abstract: Data-driven predictive solutions predominant in commercial applications tend to suffer from biases and stereotypes, which raises equity concerns. Prediction models may discover, use, or amplify spurious correlations based on gender or other protected personal characteristics, thus discriminating against marginalized groups. Mitigating gender bias has become an important research focus in NLP and is an area where annotated corpora are available. Data augmentation reduces gender bias by adding counterfactual examples to the training dataset. In this work, we show that some of the examples in the augmented dataset can be unimportant or even harmful to fairness. We therefore propose a general method for pruning both the factual and counterfactual examples to maximize the model's fairness, as measured by demographic parity, equality of opportunity, and equality of odds. The fairness achieved by our method surpasses that of data augmentation on three text classification datasets, using no more than half of the examples in the augmented dataset. Our experiments are conducted using models of varying sizes and pre-training settings.
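The abstract references two building blocks: counterfactual data augmentation (adding gender-swapped copies of training examples) and three group-fairness gaps. The following is a minimal Python sketch, not the authors' implementation: the swap lexicon, the text/label field names, and the binary-group operationalization of the metrics are assumptions made for illustration.

```python
"""Sketch of counterfactual data augmentation (CDA) and the three
group-fairness gaps named in the abstract. Illustrative only."""

from typing import Dict, List

# Illustrative (incomplete) swap lexicon; real CDA pipelines use larger
# lists and disambiguate words like "her" (him vs. his) with POS tags.
GENDER_PAIRS: Dict[str, str] = {
    "he": "she", "him": "her", "his": "her",
    "man": "woman", "men": "women",
}
SWAP = {**GENDER_PAIRS, **{v: k for k, v in GENDER_PAIRS.items()}}


def counterfactual(text: str) -> str:
    """Return the gender-swapped copy of a sentence (naive whitespace tokens)."""
    return " ".join(SWAP.get(tok, tok) for tok in text.lower().split())


def augment(dataset: List[dict]) -> List[dict]:
    """CDA: append one counterfactual example per factual example."""
    return dataset + [
        {"text": counterfactual(ex["text"]), "label": ex["label"]}
        for ex in dataset
    ]


def _mean(xs: List[int]) -> float:
    return sum(xs) / len(xs) if xs else 0.0


def demographic_parity_gap(preds, groups) -> float:
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)| for binary preds and groups."""
    return abs(
        _mean([p for p, g in zip(preds, groups) if g == 0])
        - _mean([p for p, g in zip(preds, groups) if g == 1])
    )


def equal_opportunity_gap(preds, labels, groups) -> float:
    """True-positive-rate difference between groups (condition on y = 1)."""
    tprs = [
        _mean([p for p, y, g in zip(preds, labels, groups) if y == 1 and g == grp])
        for grp in (0, 1)
    ]
    return abs(tprs[0] - tprs[1])


def equalized_odds_gap(preds, labels, groups) -> float:
    """Worst per-class gap in positive-prediction rate (covers TPR and FPR)."""
    gaps = []
    for y in (0, 1):
        rates = [
            _mean([p for p, yy, g in zip(preds, labels, groups) if yy == y and g == grp])
            for grp in (0, 1)
        ]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)
```

For instance, `augment([{"text": "He is a great doctor", "label": 1}])` adds `{"text": "she is a great doctor", "label": 1}` to the dataset. The paper's contribution then goes a step further: rather than keeping every factual and counterfactual example, it prunes the augmented set, keeping the subset that best improves gaps like those above.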

Authors (6)
  1. Abdelrahman Zayed (9 papers)
  2. Prasanna Parthasarathi (23 papers)
  3. Gonçalo Mordido (22 papers)
  4. Hamid Palangi (52 papers)
  5. Samira Shabanian (10 papers)
  6. Sarath Chandar (93 papers)
Citations (20)
