
Exploring Cross-Cultural Differences in English Hate Speech Annotations: From Dataset Construction to Analysis (2308.16705v3)

Published 31 Aug 2023 in cs.CL and cs.AI

Abstract: Warning: this paper contains content that may be offensive or upsetting. Most hate speech datasets neglect the cultural diversity within a single language, resulting in a critical shortcoming in hate speech detection. To address this, we introduce CREHate, a CRoss-cultural English Hate speech dataset. To construct CREHate, we follow a two-step procedure: 1) cultural post collection and 2) cross-cultural annotation. We sample posts from the SBIC dataset, which predominantly represents North America, and collect posts from four geographically diverse English-speaking countries (Australia, United Kingdom, Singapore, and South Africa) using culturally hateful keywords we retrieve from our survey. Annotations are collected from the four countries plus the United States to establish representative labels for each country. Our analysis highlights statistically significant disparities across countries in hate speech annotations. Only 56.2% of the posts in CREHate achieve consensus among all countries, with the highest pairwise label difference rate of 26%. Qualitative analysis shows that label disagreement occurs mostly due to different interpretations of sarcasm and the personal bias of annotators on divisive topics. Lastly, we evaluate LLMs under a zero-shot setting and show that current LLMs tend to show higher accuracies on Anglosphere country labels in CREHate. Our dataset and codes are available at: https://github.com/nlee0212/CREHate

Authors (7)
  1. Nayeon Lee (28 papers)
  2. Chani Jung (2 papers)
  3. Junho Myung (14 papers)
  4. Jiho Jin (15 papers)
  5. Jose Camacho-Collados (58 papers)
  6. Juho Kim (56 papers)
  7. Alice Oh (82 papers)
Citations (7)

Summary

Analysis of "Exploring Cross-Cultural Differences in English Hate Speech Annotations: From Dataset Construction to Analysis"

The paper introduces CREHate, a CRoss-cultural English Hate speech dataset, rigorously constructed to address a key limitation of existing hate speech datasets: their neglect of cultural diversity among annotators and speakers of the same language. The dataset is a meaningful contribution to NLP, especially for tasks that require cultural sensitivity.

Overview and Methodology

CREHate is a dataset of 1,580 online posts annotated by individuals from five English-speaking countries: Australia, the United Kingdom, Singapore, the United States, and South Africa. It aims to capture varying cultural interpretations of hate speech by drawing both annotators and content from culturally diverse backgrounds. Construction proceeds in two main steps: culture-specific post collection and cross-cultural annotation.

  • Culture-Specific Post Collection: Posts were sourced both from the SBIC dataset, which predominantly represents North America, and from platforms such as Reddit and YouTube in four additional English-speaking countries (Australia, the United Kingdom, Singapore, and South Africa), using culturally hateful keywords retrieved through a survey. This two-pronged approach yields a more balanced representation of culturally specific content.
  • Cross-Cultural Annotation: Each post was labeled by annotators from all five countries to obtain culturally representative labels. These annotations reveal significant disparities: only 56.2% of posts reach consensus across all five countries (a sketch of how such agreement statistics can be computed follows this list).
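
To make the agreement figures concrete, below is a minimal Python sketch of how the all-country consensus rate and pairwise label difference rates could be computed from per-country majority labels. This is not the authors' released code; the record format and the toy posts are assumed purely for illustration.

```python
# Sketch (not the authors' code) of the reported agreement statistics,
# computed from hypothetical per-country majority labels.
from itertools import combinations

COUNTRIES = ["US", "AU", "GB", "SG", "ZA"]

# Each post carries one majority label ("hate" / "non-hate") per country.
posts = [
    {"US": "hate", "AU": "hate", "GB": "hate", "SG": "hate", "ZA": "hate"},
    {"US": "hate", "AU": "non-hate", "GB": "hate", "SG": "non-hate", "ZA": "hate"},
    # ... 1,580 posts in the real dataset
]

# Full consensus: all five country labels agree (56.2% of CREHate posts).
consensus = sum(len({p[c] for c in COUNTRIES}) == 1 for p in posts) / len(posts)
print(f"all-country consensus: {consensus:.1%}")

# Pairwise label difference rate for every country pair
# (the paper reports a maximum of 26% and an average of 21.2%).
for a, b in combinations(COUNTRIES, 2):
    diff = sum(p[a] != p[b] for p in posts) / len(posts)
    print(f"{a}-{b} disagreement: {diff:.1%}")
```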

Significant Findings

The analysis of the collected annotations uncovers marked variation in hate speech perception across cultures. Pairwise label disagreement averages 21.2%, and chi-squared tests reveal statistically significant differences across demographic facets such as country and race. Incorporating cultural context also poses notable challenges for annotation consistency: subjective or ambiguous content whose reading depends on the annotator's cultural background requires more nuanced interpretation.
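
As an illustration of the kind of significance test described above, the sketch below runs a chi-squared test of independence between annotator country and label distribution using scipy.stats.chi2_contingency. The contingency counts are invented placeholders, not figures from the paper.

```python
# Hedged illustration of a chi-squared independence test: do hate /
# non-hate label counts depend on annotator country?
from scipy.stats import chi2_contingency

# Rows: countries; columns: [# posts labeled hate, # posts labeled non-hate].
# All counts below are placeholders for illustration only.
label_counts = [
    [700, 880],  # US
    [650, 930],  # AU
    [690, 890],  # GB
    [600, 980],  # SG
    [720, 860],  # ZA
]

chi2, p_value, dof, expected = chi2_contingency(label_counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
# A small p-value indicates that label distributions differ
# significantly across countries, as the paper reports.
```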

Theoretical and Practical Implications

The findings have significant implications for both the theoretical understanding of hate speech detection and the practical development of NLP models. The work demonstrates that current hate speech classifiers tend to overlook the role of cultural context, an oversight that limits their applicability across regions. The dataset opens pathways for research on culturally adaptive models and underscores the need for datasets that reflect diverse cultural contexts.

Model Development

The paper also examines hate speech classifiers trained on CREHate, showing that culturally aware models outperform monoculturally trained models at predicting country-specific annotations. Techniques such as multi-task learning and culture tagging further improve performance, suggesting new directions for culturally sensitive content moderation systems; a hedged sketch of both ideas follows.
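
To clarify what the two techniques look like in practice, here is a minimal PyTorch sketch of culture tagging (conditioning one shared classifier on a country token) and multi-task learning (one shared encoder with a per-country classification head). The architecture is an illustrative stand-in and may differ from the models evaluated in the paper.

```python
# Illustrative sketch, not the authors' implementation: `encoder` is a
# stand-in for any pretrained text encoder that returns pooled features.
import torch.nn as nn

COUNTRIES = ["US", "AU", "GB", "SG", "ZA"]

def tag_with_culture(post: str, country: str) -> str:
    """Culture tagging: prepend the target country so a single shared
    classifier can condition its prediction on the annotator culture."""
    return f"[{country}] {post}"

class MultiTaskHateClassifier(nn.Module):
    """Multi-task learning: a shared encoder with one binary
    (hate / non-hate) head per country, trained jointly."""
    def __init__(self, encoder: nn.Module, hidden_size: int = 768):
        super().__init__()
        self.encoder = encoder  # e.g., a pretrained transformer body
        self.heads = nn.ModuleDict(
            {c: nn.Linear(hidden_size, 2) for c in COUNTRIES}
        )

    def forward(self, inputs, country: str):
        pooled = self.encoder(inputs)       # (batch, hidden_size)
        return self.heads[country](pooled)  # (batch, 2) logits
```

Both strategies let one model share representations across countries while still producing country-specific predictions, which is the behavior the paper's results favor over monocultural training.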

Future Directions

The introduction of CREHate prompts a reevaluation of dataset and model development strategies in NLP. Future studies may consider expanding this work beyond the English language, incorporating a more diverse set of cultures, and exploring intra-country cultural variations. Furthermore, researchers could investigate similar methodologies for other subjective NLP tasks, such as irony detection or commonsense reasoning.

In conclusion, CREHate serves as a pivotal resource in advancing cross-cultural sensitivity in hate speech detection, fostering more inclusive technological solutions, and ultimately contributing to the creation of more equitable online environments.