Towards Better Inclusivity: A Diverse Tweet Corpus of English Varieties (2401.11487v1)
Abstract: The prevalence of social media presents a growing opportunity to collect and analyse examples of English varieties. Whilst these varieties were, and in many cases still are, used only in spoken contexts or hard-to-access private messages, social media sites like Twitter provide a platform for users to communicate informally in a scrapeable format. Notably, Indian English (Hinglish), Singaporean English (Singlish), and African-American English (AAE) can be commonly found online. These varieties pose a challenge to existing NLP tools as they often differ orthographically and syntactically from the standard English for which the majority of these tools are built. NLP models trained on standard English texts have produced biased outcomes for users of underrepresented varieties. Some research has aimed to overcome the inherent biases caused by unrepresentative data through techniques like data augmentation or adjusting training models. We aim to address the issue of bias at its root: the data itself. We curate a dataset of tweets from countries with high proportions of underserved English variety speakers, and propose an annotation framework of six categorical classifications along a pseudo-spectrum that measures the degree of standard English, thereby indirectly surfacing the manifestations of English varieties in these tweets. Following best annotation practices, our growing corpus features 170,800 tweets taken from 7 countries, labeled by annotators who are from those countries and can communicate in regionally dominant varieties of English. Our corpus highlights the accuracy discrepancies of pre-trained language identifiers between western English and non-western (i.e., less standard) English varieties. We hope to contribute to the growing literature identifying and reducing implicit demographic discrepancies in NLP.
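The abstract's closing claim about language-identifier accuracy can be made concrete with a small per-country check. The sketch below is illustrative only: the file name `tweet_corpus.csv`, the `country` and `text` column names, and the use of `langdetect` as the pre-trained identifier are assumptions, not the paper's actual evaluation setup.

```python
# A minimal sketch, assuming a hypothetical CSV export of the corpus with
# "country" and "text" columns; langdetect stands in for whichever pre-trained
# language identifiers the paper actually evaluates.
from collections import defaultdict

import pandas as pd
from langdetect import detect, LangDetectException

tweets = pd.read_csv("tweet_corpus.csv")  # hypothetical file, one row per tweet

hits = defaultdict(int)    # tweets identified as English, per country
totals = defaultdict(int)  # all collected tweets, per country

for country, text in zip(tweets["country"], tweets["text"]):
    totals[country] += 1
    try:
        predicted = detect(str(text))  # ISO 639-1 code, e.g. "en"
    except LangDetectException:        # raised for empty or non-linguistic input
        predicted = None
    if predicted == "en":
        hits[country] += 1

# Share of each country's tweets that the identifier labels as English; lower
# shares for countries with widely used local varieties would reflect the
# accuracy discrepancy the corpus is designed to surface.
for country in sorted(totals):
    print(f"{country}: {hits[country] / totals[country]:.2%} identified as English")
```

Such a per-country detection rate is only a crude proxy, since it treats every collected tweet as English; the paper's six-way annotation scheme would allow a finer-grained breakdown by degree of standardness.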
Authors: Nhi Pham, Lachlan Pham, Adam L. Meyers