
Revisiting The Classics: A Study on Identifying and Rectifying Gender Stereotypes in Rhymes and Poems (2403.11752v2)

Published 18 Mar 2024 in cs.CL

Abstract: Rhymes and poems are a powerful medium for transmitting cultural norms and societal roles. However, the pervasive presence of gender stereotypes in these works perpetuates biased perceptions and limits the scope of individuals' identities. Past work has shown that stereotyping and prejudice emerge in early childhood, and developmental research on their causal mechanisms is critical for understanding and controlling them. This work contributes by gathering a dataset of rhymes and poems to identify gender stereotypes and proposes a model that identifies gender bias with 97% accuracy. Gender stereotypes were rectified using an LLM, and the effectiveness of these rectifications was evaluated in a comparative survey against rectifications by human educators. In summary, this work highlights the pervasive nature of gender stereotypes in literary works and reveals the potential of LLMs to rectify them. The study raises awareness and promotes inclusivity within artistic expressions, making a significant contribution to the discourse on gender equality.
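
The abstract describes a two-step pipeline: a classifier flags stereotyped rhymes, and an LLM rewrites them before human educators judge the result. As a rough illustration of the rectification step only, the sketch below prompts a chat LLM to rewrite a rhyme; the prompt wording, model name, and `rectify_rhyme` helper are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of LLM-based rectification of a stereotyped rhyme.
# Assumptions (not from the paper): an OpenAI-style chat-completions API,
# a placeholder model name, and an illustrative prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RECTIFY_PROMPT = (
    "Rewrite the following rhyme so that it no longer relies on gender "
    "stereotypes (e.g. roles, traits, or occupations tied to one gender), "
    "while preserving its rhythm, rhyme scheme, and meaning:\n\n{rhyme}"
)

def rectify_rhyme(rhyme: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM for a stereotype-free rewrite of a rhyme."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": RECTIFY_PROMPT.format(rhyme=rhyme)}],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(rectify_rhyme("Polly put the kettle on, we'll all have tea."))
```

In the paper's evaluation, such LLM-generated rewrites were compared in a survey against rectifications produced by human educators.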

