Mitigating Gender Bias for Neural Dialogue Generation with Adversarial Learning (2009.13028v2)

Published 28 Sep 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Dialogue systems play an increasingly important role in various aspects of our daily life. It is evident from recent research that dialogue systems trained on human conversation data are biased. In particular, they can produce responses that reflect people's gender prejudice. Many debiasing methods have been developed for various NLP tasks, such as word embedding debiasing. However, they are not directly applicable to dialogue systems because they are likely to force dialogue models to generate similar responses for different genders. This greatly degrades the diversity of the generated responses and severely hurts the performance of the dialogue models. In this paper, we propose a novel adversarial learning framework, Debiased-Chat, to train dialogue models free from gender bias while preserving their performance. Extensive experiments on two real-world conversation datasets show that our framework significantly reduces gender bias in dialogue models while maintaining response quality. The implementation of the proposed framework is released.
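The abstract describes adversarial training of a dialogue model against a bias signal. One common mechanism for this kind of setup (an assumption here, not a detail taken from the paper) is a gradient reversal layer: it acts as the identity in the forward pass, but flips and scales gradients in the backward pass, so the upstream encoder learns representations that *maximize* the adversary's loss. A minimal sketch of that building block:

```python
import numpy as np

class GradientReversal:
    """Hypothetical sketch of a gradient reversal layer, a standard trick
    in adversarial debiasing (not taken from the paper's released code).
    Forward pass: identity. Backward pass: gradients are multiplied by
    -lam, so the encoder before this layer is trained to make the
    adversary (e.g. a gender classifier on responses) fail."""

    def __init__(self, lam=1.0):
        self.lam = lam  # strength of the adversarial signal

    def forward(self, x):
        # Pass activations through unchanged.
        return x

    def backward(self, grad_output):
        # Flip the sign and scale: the encoder ascends the adversary's loss.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
h = np.array([1.0, -2.0, 3.0])
out = grl.forward(h)            # identical to h
g = np.array([0.1, 0.2, -0.3])
rev = grl.backward(g)           # -0.5 * g
```

In a full training loop, the adversary itself is updated normally to predict the protected attribute, while the reversed gradients flowing into the dialogue model remove that information from its representations without forcing identical responses across genders.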

Authors (6)
  1. Haochen Liu (40 papers)
  2. Wentao Wang (47 papers)
  3. Yiqi Wang (39 papers)
  4. Hui Liu (481 papers)
  5. Zitao Liu (76 papers)
  6. Jiliang Tang (204 papers)
Citations (68)