GUIR at SemEval-2020 Task 12: Domain-Tuned Contextualized Models for Offensive Language Detection (2007.14477v1)

Published 28 Jul 2020 in cs.CL

Abstract: Offensive language detection is an important and challenging task in natural language processing. We present our submissions to the OffensEval 2020 shared task, which includes three English sub-tasks: identifying the presence of offensive language (Sub-task A), identifying the presence of a target in offensive language (Sub-task B), and identifying the categories of the target (Sub-task C). Our experiments explore using a domain-tuned contextualized language model (namely, BERT) for this task. We also experiment with different components and configurations (e.g., a multi-view SVM) stacked upon BERT models for specific sub-tasks. Our submissions achieve F1 scores of 91.7% in Sub-task A, 66.5% in Sub-task B, and 63.2% in Sub-task C. We perform an ablation study which reveals that domain tuning considerably improves the classification performance. Furthermore, error analysis shows common misclassification errors made by our model and outlines directions for future research.
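
The core setup described in the abstract is a BERT classifier fine-tuned for offensive-language detection. The following minimal sketch shows what the Sub-task A (OFF vs. NOT) classification step could look like with the Hugging Face transformers library; the checkpoint name, label order, and hyperparameters are illustrative assumptions, not the authors' released code, and "bert-base-uncased" stands in for their domain-tuned model (domain tuning here means continued pretraining on in-domain tweet data before fine-tuning).

```python
# Hedged sketch of Sub-task A inference: binary offensive-language
# classification with a fine-tuned BERT model. All names below are
# assumptions for illustration, not the paper's actual artifacts.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Stand-in for the authors' domain-tuned checkpoint.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # two classes: NOT vs. OFF
)
model.eval()

batch = tokenizer(
    ["you are a wonderful person", "example tweet to classify"],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits
preds = logits.argmax(dim=-1)  # label order (0=NOT, 1=OFF) is an assumption
print(preds.tolist())
```

For Sub-tasks B and C, the abstract notes additional components (e.g., a multi-view SVM) stacked on top of BERT outputs; a natural reading is that BERT representations or predictions feed a downstream classifier, but the exact stacking configuration is detailed in the paper itself.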

Authors (7)
  1. Sajad Sotudeh (11 papers)
  2. Tong Xiang (11 papers)
  3. Hao-Ren Yao (7 papers)
  4. Sean MacAvaney (75 papers)
  5. Eugene Yang (37 papers)
  6. Nazli Goharian (43 papers)
  7. Ophir Frieder (24 papers)
Citations (11)