
Tokenization Matters: Navigating Data-Scarce Tokenization for Gender Inclusive Language Technologies (2312.11779v3)

Published 19 Dec 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Gender-inclusive NLP research has documented the harmful limitations of gender binary-centric large language models (LLMs), such as the inability to correctly use gender-diverse English neopronouns (e.g., xe, zir, fae). While data scarcity is a known culprit, the precise mechanisms through which scarcity affects this behavior remain underexplored. We discover LLM misgendering is significantly influenced by Byte-Pair Encoding (BPE) tokenization, the tokenizer powering many popular LLMs. Unlike binary pronouns, BPE overfragments neopronouns, a direct consequence of data scarcity during tokenizer training. This disparate tokenization mirrors tokenizer limitations observed in multilingual and low-resource NLP, unlocking new misgendering mitigation strategies. We propose two techniques: (1) pronoun tokenization parity, a method to enforce consistent tokenization across gendered pronouns, and (2) utilizing pre-existing LLM pronoun knowledge to improve neopronoun proficiency. Our proposed methods outperform finetuning with standard BPE, improving neopronoun accuracy from 14.1% to 58.4%. Our paper is the first to link LLM misgendering to tokenization and deficient neopronoun grammar, indicating that LLMs unable to correctly treat neopronouns as pronouns are more prone to misgender.

Overview of "Tokenization Matters: Navigating Data-Scarce Tokenization for Gender Inclusive Language Technologies"

The paper "Tokenization Matters: Navigating Data-Scarce Tokenization for Gender Inclusive Language Technologies" investigates the intersection of gender inclusivity and NLP, focusing on how LLMs tokenize gendered and non-binary pronouns differently. The authors, Anaelia Ovalle et al., examine in detail how Byte-Pair Encoding (BPE), the prevalent tokenization mechanism, disproportionately fragments neopronouns. Because gender-diverse text is underrepresented in the corpora used to train tokenizers, this fragmentation leads to syntactic difficulties and misgendering in the resulting models.
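The fragmentation effect can be illustrated with a toy greedy longest-match subword tokenizer (a simplification of BPE's output behavior, not the paper's code). The vocabulary below is hypothetical: frequent binary pronouns survive as whole tokens, while neopronoun forms like "xe" and "zir" are only reachable through smaller fragments, mimicking what a BPE vocabulary learns from data-scarce text.

```python
# Hypothetical subword vocabulary: binary pronouns are whole tokens,
# neopronouns are only coverable by fragments.
VOCAB = {"she", "her", "he", "him", "they", "them",
         "x", "e", "m", "yr", "z", "ir", "fa"}

def tokenize(word: str) -> list[str]:
    """Split a word into vocabulary pieces, greedily matching the
    longest prefix at each step (BPE-style segmentation behavior)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])          # fall back to single char
            i += 1
    return pieces

for w in ["she", "they", "xe", "xyr", "zir"]:
    print(w, "->", tokenize(w))
# Binary pronouns stay intact; neopronouns split into multiple pieces.
```

A model consuming these fragments never sees "xe" as a single pronoun-like unit, which is the mechanism the paper links to degraded grammatical handling.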

Key Contributions and Findings

  1. Impact of BPE Tokenization:
    • The research highlights that BPE tokenization results in overfragmentation of neopronouns due to their low frequency in training datasets. Unlike binary pronouns, which are typically preserved as single tokens, neopronouns are split into multiple subword tokens.
    • This fragmentation complicates the processing of syntactic structures, leading to poor performance in tasks such as Part-of-Speech (POS) tagging and dependency parsing, as evidenced by substantial prior work.
  2. Evaluation of Misgendering:
    • The paper develops a series of metrics to assess LLMs on their use of neopronouns. The metrics include pronoun consistency, case agreement errors, and adversarial injection errors.
    • They found that these errors correlate strongly with the extent of token fragmentation, pointing to a direct link between poor tokenization handling and misgendering.
  3. Mitigation Strategies:
    • Two primary strategies are proposed to enhance LLMs' handling of neopronouns: Pronoun Tokenization Parity (PTP) and leveraging existing LLM pronoun knowledge.
    • PTP aligns the tokenization of neopronouns with that of binary pronouns by introducing new tokens and embeddings for them, preserving the morphemic integrity of pronouns.
    • The modified finetuning approach, which involves only adjusting the lexical layer, leverages the pre-existing knowledge in LLMs to improve context-related performance without retraining the entire network.
  4. Experimental Results:
    • Experiments using PTP and lexical layer finetuning show a significant increase in neopronoun consistency, from 14.1% with standard BPE finetuning to 58.4%.
    • These results were consistent across varying LLM model sizes and demonstrated a reduction in adversarial injection errors, thus underscoring the utility of enhancing grammatical proficiency to reduce misgendering.
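The two mitigation ideas can be sketched together in a minimal, stdlib-only form (a conceptual illustration, not the authors' implementation; all names and the donor-averaging initialization are assumptions): give the neopronoun its own single token so it tokenizes like "she" or "he", seed its new embedding from existing pronoun embeddings, and mark only the lexical (embedding) layer as trainable.

```python
import random

random.seed(0)
DIM = 4  # toy embedding dimensionality

# Tiny stand-in for a pretrained model's vocabulary and embedding table.
vocab = {"she": 0, "he": 1, "they": 2}
embeddings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in vocab]

def add_pronoun_token(word, donor_words):
    """Register `word` as a single whole token (tokenization parity)
    and initialize its embedding as the mean of the donor pronouns'
    embeddings, reusing the model's existing pronoun knowledge."""
    vocab[word] = len(vocab)
    donors = [embeddings[vocab[d]] for d in donor_words]
    mean = [sum(col) / len(donors) for col in zip(*donors)]
    embeddings.append(mean)
    return vocab[word]

xe_id = add_pronoun_token("xe", ["she", "he", "they"])

# During finetuning, only the embedding table would receive gradient
# updates; all other network parameters stay frozen (in a framework
# like PyTorch, via `param.requires_grad = False` on non-embedding
# parameters).
trainable = {"embeddings"}  # lexical layer only

print(xe_id, embeddings[xe_id])
```

Initializing from related embeddings rather than at random gives the new token a plausible starting point in the model's representation space, which is one way to leverage pre-existing pronoun knowledge as the paper proposes.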

Implications and Future Directions

The findings in this paper underscore the critical role of tokenization in developing more inclusive NLP systems. The proposed strategies not only improve the accuracy of LLMs in handling neopronouns but also offer a blueprint for future work in NLP focusing on fairness and inclusivity. By advancing methods to handle low-resource tokenization challenges, this work could extend its applications to multilingual NLP tasks and other linguistic scenarios where token frequency disparity leads to biased model behavior.

Future research deriving from these findings could involve exploring the scalability of PTP in multilingual LLMs and testing new tokenization algorithms that inherently balance representation across diverse linguistic categories. As NLP systems increasingly become integrated into societal structures, ensuring their inclusiveness and fairness remains a priority, and this paper provides a valuable contribution toward that endeavor in gender-inclusive language technologies.

Authors (9)
  1. Anaelia Ovalle (16 papers)
  2. Ninareh Mehrabi (26 papers)
  3. Palash Goyal (31 papers)
  4. Jwala Dhamala (22 papers)
  5. Kai-Wei Chang (292 papers)
  6. Richard Zemel (82 papers)
  7. Aram Galstyan (142 papers)
  8. Rahul Gupta (146 papers)
  9. Yuval Pinter (41 papers)
Citations (5)