
Auditing Gender Analyzers on Text Data (2310.06061v1)

Published 9 Oct 2023 in cs.CY and cs.CL

Abstract: AI models have become extremely popular and accessible to the general public. However, they remain under continual scrutiny for their demonstrable biases against various sections of society, such as people of color and non-binary people. In this study, we audit three existing gender analyzers -- uClassify, Readable and HackerFactor -- for biases against non-binary individuals. These tools are designed to predict only the cisgender binary labels, which leads to discrimination against non-binary members of society. We curate two datasets -- Reddit comments (660k) and Tumblr posts (2.05M) -- and our experimental evaluation shows that the tools are highly inaccurate, with an overall accuracy of ~50% on all platforms. Predictions for non-binary comments on all platforms are mostly female, thus propagating the societal bias that non-binary individuals are effeminate. To address this, we fine-tune a BERT multi-label classifier on the two datasets in multiple combinations, observing an overall performance of ~77% in the most realistically deployable setting and a surprisingly high performance of 90% for the non-binary class. We also audit ChatGPT using zero-shot prompts on a small dataset (due to the high API cost) and observe an average accuracy of 58% for Reddit and Tumblr combined (with overall better results for Reddit). Thus, we show that existing systems, including highly advanced ones like ChatGPT, are biased and need better audits and moderation, and that such societal biases can be addressed and alleviated through simple off-the-shelf models like BERT trained on more gender-inclusive datasets.
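To make the modeling step concrete, below is a minimal sketch of fine-tuning bert-base-uncased for three-way gender classification (male, female, non-binary) with the Hugging Face Transformers Trainer. The label mapping, placeholder data, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tuning BERT for three-way gender classification.
# Label scheme, toy data, and hyperparameters are illustrative assumptions.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = {"male": 0, "female": 1, "non-binary": 2}  # assumed label mapping

class GenderTextDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_len=128):
        # Tokenize all texts up front; pad/truncate to a fixed length.
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Toy placeholder data; in practice these would be the curated Reddit/Tumblr
# comments paired with self-identified gender labels.
train_texts = ["example comment one", "example comment two"]
train_labels = [LABELS["female"], LABELS["non-binary"]]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=GenderTextDataset(train_texts, train_labels, tokenizer),
)
trainer.train()
```

Similarly, the ChatGPT audit can be approximated with a zero-shot prompt through the OpenAI chat API. The prompt wording and model name below are assumptions; the paper does not reproduce its exact prompt here.

```python
# Minimal sketch: zero-shot gender prediction via a chat model, in the
# spirit of the paper's ChatGPT audit. Prompt and model name are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def predict_gender(comment: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper used ChatGPT circa 2023
        messages=[{
            "role": "user",
            "content": ("Predict the gender of the author of the following "
                        "comment. Answer with one word: male, female, or "
                        f"non-binary.\n\nComment: {comment}"),
        }],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

print(predict_gender("example comment text"))
```

In an audit loop, predict_gender would be called on each sampled comment and its output compared against the author's self-identified label to compute per-class accuracy.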

References (41)
  1. C. Richards, W. P. Bouman, L. Seal, M. J. Barker, T. O. Nieder, and G. T’Sjoen, “Non-binary or genderqueer genders,” International Review of Psychiatry, pp. 95–102, 2016.
  2. UK, “Gender Recognition Act 2004,” 2004. Accessed: 2023-01-31.
  3. L. S. Weinhardt, P. Stevens, H. Xie, L. M. Wesp, S. A. John, I. Apchemengich, D. Kioko, S. Chavez-Korell, K. M. Cochran, J. M. Watjen, et al., “Transgender and gender nonconforming youths’ public facilities use and psychological well-being: a mixed-method study,” Transgender health, pp. 140–150, 2017.
  4. B. P. Bagagli, T. V. Chaves, and M. G. Zoppi Fontana, “Trans women and public restrooms: The legal discourse and its violence,” Frontiers in Sociology, 2021.
  5. T. Bates, C. S. Thomas, and A. R. Timming, “Employment discrimination against gender diverse individuals in Western Australia,” Equality, Diversity and Inclusion: An International Journal, pp. 273–289, 2021.
  6. UN, “The struggle of trans and gender-diverse persons.” https://www.ohchr.org/en/special-procedures/ie-sexual-orientation-and-gender-identity/struggle-trans-and-gender-diverse-persons, 2021. Accessed: 2023-01-31.
  7. J. Buolamwini and T. Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in PMLR FAT*, 2018.
  8. S. Jaiswal, K. Duggirala, A. Dash, and A. Mukherjee, “Two-face: Adversarial audit of commercial face recognition systems,” AAAI ICWSM, pp. 381–392, 2022.
  9. T. Sühr, S. Hilgard, and H. Lakkaraju, “Does fair ranking improve minority outcomes? Understanding the interplay of human and algorithmic biases in online hiring,” in AIES, pp. 989–999, 2021.
  10. Y. Feng and C. Shah, “Has CEO gender bias really been fixed? Adversarial attacking and improving gender fairness in image search,” in AAAI, 2022.
  11. O. Keyes, “The misgendering machines: Trans/HCI implications of automatic gender recognition,” CSCW, pp. 1–22, 2018.
  12. M. K. Scheuerman, J. M. Paul, and J. R. Brubaker, “How computers see gender: An evaluation of gender classification in commercial facial analysis services,” CSCW, pp. 1–33, 2019.
  13. S. Jaiswal and A. Mukherjee, “Marching with the pink parade: Evaluating visual search recommendations for non-binary clothing items,” CHI Extended Abstracts, 2022.
  14. Amazon, “Amazon aws rekognition.” https://aws.amazon.com/rekognition/faqs/, 2022. Accessed: 2023-01-31.
  15. Face++, “Face++ detect.” https://www.faceplusplus.com/face-detection/, 2022. Accessed: 2023-01-31.
  16. Clarifai, “Clarifai.” https://www.clarifai.com/models/ai-face-detection, 2022. Accessed: 2023-01-31.
  17. Microsoft, “Microsoft azure face.” https://azure.microsoft.com/en-in/services/cognitive-services/face/, 2022. Accessed: 2023-01-31.
  18. uClassify, “uClassify gender analyzer.” https://www.uclassify.com/browse/uclassify/genderanalyzer_v5, 2022. Accessed: 2023-01-31.
  19. Readable, “Readable gender analyzer.” https://app.readable.com/text/gender/, 2022. Accessed: 2023-01-31.
  20. HackerFactor, “Hackerfactor gender guesser.” https://www.hackerfactor.com/GenderGuesser.php, 2022. Accessed: 2023-01-31.
  21. N. Cheng, R. Chandramouli, and K. Subbalakshmi, “Author gender identification from text,” Digital Investigation, pp. 78–88, 2011.
  22. S. Mukherjee and P. K. Bala, “Gender classification of microblog text based on authorial style,” ISeB, pp. 117–138, 2017.
  23. J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women.” https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, 2018. Accessed: 2022-05-31.
  24. B. Onikoyi, N. Nnamoko, and I. Korkontzelos, “Gender prediction with descriptive textual data using a machine learning approach,” Natural Language Processing Journal, vol. 4, p. 100018, 2023.
  25. C. Sandvig, K. Hamilton, K. Karahalios, and C. Langbort, “Auditing algorithms: Research methods for detecting discrimination on internet platforms,” Data and discrimination: converting critical concerns into productive inquiry, 2014.
  26. OpenAI, “ChatGPT.” https://chat.openai.com/, 2022. Accessed: 2023-01-31.
  27. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
  28. Reddit, “Reddit.” https://reddit.com. Accessed: 2023-01-31.
  29. Tumblr, “Tumblr.” https://www.tumblr.com/. Accessed: 2023-01-31.
  30. N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM CSUR, pp. 1–35, 2021.
  31. F. Safara, A. S. Mohammed, M. Yousif Potrus, S. Ali, Q. T. Tho, A. Souri, F. Janenia, and M. Hosseinzadeh, “An author gender detection method using whale optimization algorithm and artificial neural network,” IEEE Access, pp. 48428–48437, 2020.
  32. 2019.
  33. P. Vashisth and K. Meehan, “Gender classification using Twitter text data,” in ISSC, 2020.
  34. A. F. Sotelo, H. Gómez-Adorno, O. Esquivel-Flores, and G. Bel-Enguix, “Gender identification in social media using transfer learning,” in Mexican Conference on Pattern Recognition, pp. 293–303, 2020.
  35. E. E. Abdallah, J. R. Alzghoul, and M. Alzghool, “Age and gender prediction in open domain text,” Procedia Computer Science, pp. 563–570, 2020.
  36. A. Angeles and M. N. Quintos, “Text-based gender classification of Twitter data using Naive Bayes and SVM algorithm,” in TENCON, 2021.
  37. H. Liu and M. Cocea, “Fuzzy rule based systems for gender classification from blog data,” in ICACI, pp. 79–84, 2018.
  38. C. Aravantinou, V. Simaki, I. Mporas, and V. Megalooikonomou, “Gender classification of web authors using feature selection and language models,” in Speech and Computer, pp. 226–233, 2015.
  39. W. Deitrick, Z. Miller, B. Valyou, B. Dickinson, T. Munson, and W. Hu, “Author gender prediction in an email stream using neural networks,” 2012.
  40. A. Bartle and J. Zheng, “Gender classification with deep learning,” Stanford CS224d course project report, pp. 1–7, 2015.
  41. E. Vasilev, “Inferring gender of reddit users,” master’s thesis, Universität Koblenz-Landau, Universitätsbibliothek, 2018.