
KOLD: Korean Offensive Language Dataset (2205.11315v2)

Published 23 May 2022 in cs.CL and cs.AI

Abstract: Recent directions for offensive language detection are hierarchical modeling, identifying the type and the target of offensive language, and interpretability with offensive span annotation and prediction. These improvements are focused on English and do not transfer well to other languages because of cultural and linguistic differences. In this paper, we present the Korean Offensive Language Dataset (KOLD) comprising 40,429 comments, which are annotated hierarchically with the type and the target of offensive language, accompanied by annotations of the corresponding text spans. We collect the comments from NAVER news and YouTube platform and provide the titles of the articles and videos as the context information for the annotation process. We use these annotated comments as training data for Korean BERT and RoBERTa models and find that they are effective at offensiveness detection, target classification, and target span detection while having room for improvement for target group classification and offensive span detection. We discover that the target group distribution differs drastically from the existing English datasets, and observe that providing the context information improves the model performance in offensiveness detection (+0.3), target classification (+1.5), and target group classification (+13.1). We publicly release the dataset and baseline models.

Korean Offensive Language Dataset: A Comprehensive Framework for Multi-Dimensional Offensive Language Analysis

The paper "KOLD: Korean Offensive Language Dataset" introduces the Korean Offensive Language Dataset (KOLD), which represents a substantial advancement in language-specific offensive language detection models through the creation of an annotated dataset tailored to Korean sociocultural contexts. The focus of this paper is to address the inadequacies of English-centric models and datasets when applied to Korean and potentially other non-English languages.

Dataset Development and Annotation

KOLD comprises 40,429 comments sourced from NAVER news articles and YouTube videos, annotated with a hierarchical taxonomy covering offensiveness, target type, and target group, with the article or video title provided as context. The dataset includes annotations of offensive spans, target spans, and targeted groups relevant to Korean culture, addressing the limitations of previous Korean datasets, which lacked hierarchical labels and contextual information.
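
As a quick orientation, the sketch below loads a hypothetical copy of the released data and tallies the top-level labels. The file name and field names are assumptions for illustration; the actual release format may differ.

```python
import json
from collections import Counter

# Hypothetical file name and field names; adjust to the actual release.
with open("kold_v1.json", encoding="utf-8") as f:
    records = json.load(f)

print(f"{len(records)} annotated comments")  # expected: 40,429

# Tally Level A labels, assuming a boolean "OFF" field per record.
print(Counter(r["OFF"] for r in records))
```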

Hierarchical Annotation Framework

The dataset is structured into three levels of annotation:

  1. Offensive Language Detection (Level A): This involves identifying whether a comment is offensive and pinpointing the offensive span within the text.
  2. Target Type Categorization (Level B): This level distinguishes whether the offensive comment is untargeted, aimed at an individual, a group, or other entities such as organizations.
  3. Target Group Identification (Level C): For group-targeted offenses, this identifies specific groups such as gender, ethnicity, political affiliation, or cultural identity.

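To make the hierarchy concrete, the sketch below models one annotated comment as a small Python structure. The field names (`offensive`, `target_type`, `target_group`, span offsets) are illustrative assumptions rather than the exact schema of the released files.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class KoldExample:
    """One comment with illustrative three-level, KOLD-style labels."""
    title: str                      # article or video title used as context
    comment: str                    # the annotated comment text
    # Level A: is the comment offensive, and which span is offensive?
    offensive: bool
    offensive_spans: list[tuple[int, int]] = field(default_factory=list)
    # Level B: untargeted, individual, group, or other (e.g., organization).
    target_type: Optional[str] = None
    target_spans: list[tuple[int, int]] = field(default_factory=list)
    # Level C: for group-targeted offenses, the specific group
    # (gender, ethnicity, political affiliation, etc.).
    target_group: Optional[str] = None

# Fabricated instance showing how the three levels nest:
example = KoldExample(
    title="Some news article title",
    comment="An offensive comment aimed at a group ...",
    offensive=True,
    offensive_spans=[(3, 12)],
    target_type="group",
    target_group="political affiliation",
)
```
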
Methodology and Evaluation

The paper fine-tunes Korean BERT and RoBERTa models to evaluate KOLD on offensiveness detection and categorization tasks. Providing the article or video title as context improves performance in offensiveness detection (+0.3), target classification (+1.5), and, most notably, target group classification (+13.1 points), underscoring the impact of incorporating contextual information. Multi-task models also outperform single-task models in span prediction, highlighting the value of joint learning for interpretability and span accuracy.
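
The following sketch shows one plausible way to set up the context-conditioned Level A task: the title and the comment are encoded as a sentence pair so the classifier can condition on the article context. The checkpoint name `klue/bert-base` and all training details here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A publicly available Korean BERT checkpoint; the paper's exact model may differ.
MODEL_NAME = "klue/bert-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Title (context) and comment are encoded as a sentence pair, mirroring the
# finding that title context improves offensiveness detection.
titles = ["Article title providing context", "Another title"]
comments = ["A possibly offensive comment", "A benign comment"]
labels = torch.tensor([1, 0])  # 1 = offensive, 0 = not offensive

batch = tokenizer(
    titles, comments,
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)

model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # plug into an optimizer or Trainer for real fine-tuning
print(float(outputs.loss))
```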

Cultural and Linguistic Insights

A striking observation from the dataset analysis is how target group prevalence diverges from English datasets. For instance, groups such as Korean-Chinese people and feminists are prominent targets in the Korean data but appear far less often in English datasets. This divergence reflects linguistic and sociopolitical differences between communities and underscores the need for language-specific datasets for effective offensive language detection.

Implications and Future Work

The development of KOLD lays foundational groundwork for both theoretical and practical applications in NLP for non-English languages. On a theoretical level, it emphasizes the importance of culturally and linguistically specific approaches to hate speech detection. Practically, it provides a robust dataset for the development of models capable of adapting to complex cultural contexts, offering key insights for similar endeavors in other languages and regions.

Furthermore, the paper suggests the need for continuous updates to offensive language datasets to reflect evolving social and political climates, such as emergent hate speech patterns during crises like COVID-19. It hints at future work that could explore machine-generated datasets to balance scale and accuracy.

In summary, KOLD represents a significant step towards understanding and mitigating online offensive speech through a culturally informed lens. By publicly releasing this dataset, the authors invite further research aimed at enhancing offensive language detection systems globally, while also acknowledging ethical considerations related to the sensitive nature of the content.

Authors (7)
  1. Younghoon Jeong
  2. Juhyun Oh
  3. Jaimeen Ahn
  4. Jongwon Lee
  5. Jihyung Moon
  6. Sungjoon Park
  7. Alice Oh