
Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation (2406.00787v1)

Published 2 Jun 2024 in cs.CL

Abstract: Most works on gender bias focus on intrinsic bias -- removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from the impact of such debiasing on downstream applications, which is the main motivation for debiasing in the first place. In this work, we systematically test how methods for intrinsic debiasing affect neural machine translation models, by measuring the extrinsic bias of such systems under different design choices. We highlight three challenges and mismatches between the debiasing techniques and their end-goal usage, including the choice of embeddings to debias, the mismatch between word-level and sub-word token-level debiasing, and the effect on different target languages. We find that these considerations have a significant impact on downstream performance and the success of debiasing.
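To make the notion of intrinsic debiasing concrete: a common family of such methods removes a protected attribute by projecting embeddings onto the subspace orthogonal to a learned bias direction (as in hard-debiasing approaches). The sketch below is illustrative only and is not the paper's specific method; the function names and the toy bias direction are hypothetical.

```python
import numpy as np

def project_out(embeddings: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove each embedding's component along a bias direction.

    A minimal sketch of linear-projection intrinsic debiasing:
    e' = e - (e . d) d, where d is the unit bias direction.
    """
    d = bias_direction / np.linalg.norm(bias_direction)
    return embeddings - np.outer(embeddings @ d, d)

# Toy example: 2-D "embeddings" and a hypothetical gender direction along x.
emb = np.array([[1.0, 2.0],
                [3.0, -1.0]])
gender_dir = np.array([1.0, 0.0])

debiased = project_out(emb, gender_dir)
# After projection, every embedding is orthogonal to the bias direction.
```

In the word vs. sub-word mismatch the abstract highlights, a direction estimated from whole-word embeddings may not align with any single sub-word token's embedding, which is one reason applying such projections inside an NMT model is non-trivial.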

Authors (4)
  1. Bar Iluz (2 papers)
  2. Yanai Elazar (44 papers)
  3. Asaf Yehudai (16 papers)
  4. Gabriel Stanovsky (61 papers)
