
Does Gender Matter? Towards Fairness in Dialogue Systems (1910.10486v3)

Published 16 Oct 2019 in cs.CL and cs.AI

Abstract: Recently, there have been increasing concerns about the fairness of AI in real-world applications such as computer vision and recommendation systems. For example, recognition algorithms in computer vision have been unfair to black people, poorly detecting their faces and inappropriately labeling them as "gorillas". As one crucial application of AI, dialogue systems have been extensively applied in our society. They are usually built with real human conversational data; thus, they could inherit fairness issues that exist in the real world. However, the fairness of dialogue systems has not been well investigated. In this paper, we perform a pioneering study of the fairness issues in dialogue systems. In particular, we construct a benchmark dataset and propose quantitative measures to understand fairness in dialogue models. Our studies demonstrate that popular dialogue models show significant prejudice towards different genders and races. Moreover, to mitigate bias in dialogue systems, we propose two simple but effective debiasing methods. Experiments show that our methods can significantly reduce the bias in dialogue systems. The dataset and the implementation are released to foster fairness research in dialogue systems.

Authors (6)
  1. Haochen Liu (40 papers)
  2. Jamell Dacon (5 papers)
  3. Wenqi Fan (78 papers)
  4. Hui Liu (481 papers)
  5. Zitao Liu (76 papers)
  6. Jiliang Tang (204 papers)
Citations (134)