Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models (2206.11484v2)
Published 23 Jun 2022 in cs.CL and cs.CY
Abstract: This paper presents exploratory work on whether and to what extent biases against queer and trans people are encoded in LLMs such as BERT. We also propose a method for reducing these biases in downstream tasks: finetuning the models on data written by and/or about queer people. To measure anti-queer bias, we introduce a new benchmark dataset, WinoQueer, modeled after other bias-detection benchmarks but addressing homophobic and transphobic biases. We found that BERT shows significant homophobic bias, but this bias can be mostly mitigated by finetuning BERT on a natural language corpus written by members of the LGBTQ+ community.
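The abstract describes two techniques: measuring bias with a paired-sentence benchmark, and mitigating it by finetuning on community-written text. As a rough illustration of the first idea, here is a minimal sketch of a CrowS-Pairs-style pseudo-log-likelihood comparison under a masked language model. The checkpoint, sentence pair, and scoring details below are illustrative assumptions, not the authors' implementation; WinoQueer's exact scoring procedure is defined in the paper.

```python
# Illustrative sketch only: score a stereotyping vs. non-stereotyping
# sentence pair under a masked LM. The sentence pair is hypothetical
# and is NOT drawn from the WinoQueer benchmark.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum the log-probability of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the [CLS] (first) and [SEP] (last) special tokens.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Hypothetical pair: same predicate, different group mention.
stereo = "Gay people are bad parents."
anti = "Straight people are bad parents."
print(pseudo_log_likelihood(stereo), pseudo_log_likelihood(anti))
# If the model systematically assigns higher likelihood to the
# stereotyping sentence across many pairs, that is evidence of
# encoded bias of the kind the benchmark is designed to surface.
```

The mitigation side of the paper, finetuning BERT on a corpus written by LGBTQ+ community members, would amount to continued masked-language-model training on that corpus before rescoring the benchmark pairs.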
- Virginia K. Felkner
- Ho-Chun Herbert Chang
- Eugene Jang
- Jonathan May