AustroTox: A Dataset for Target-Based Austrian German Offensive Language Detection (2406.08080v1)

Published 12 Jun 2024 in cs.CL, cs.AI, and cs.CY

Abstract: Model interpretability in toxicity detection greatly profits from token-level annotations. However, such annotations are currently only available in English. We introduce a dataset annotated for offensive language detection sourced from a news forum, notable for its incorporation of the Austrian German dialect, comprising 4,562 user comments. In addition to binary offensiveness classification, we identify spans within each comment constituting vulgar language or representing targets of offensive statements. We evaluate fine-tuned language models as well as LLMs in a zero- and few-shot fashion. The results indicate that while fine-tuned models excel in detecting linguistic peculiarities such as vulgar dialect, LLMs demonstrate superior performance in detecting offensiveness in AustroTox. We publish the data and code.
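The abstract describes each comment carrying a binary offensiveness label plus spans marking vulgar language and targets of offensive statements. The sketch below illustrates one plausible shape for such a span-annotated record; the field names, labels, and the example comment are illustrative assumptions, not the published AustroTox schema.

```python
from dataclasses import dataclass, field

# Illustrative record shape for a span-annotated comment.
# Field names and labels are assumptions, not the published schema.
@dataclass
class Span:
    start: int   # character offset where the span begins
    end: int     # character offset one past the last character
    label: str   # e.g. "vulgarity" or "target"

@dataclass
class Comment:
    text: str
    offensive: bool                          # binary offensiveness label
    spans: list[Span] = field(default_factory=list)

def extract_spans(comment: Comment, label: str) -> list[str]:
    """Return the surface text of all spans carrying a given label."""
    return [comment.text[s.start:s.end] for s in comment.spans if s.label == label]

# Hypothetical example (not taken from the dataset): an offensive comment
# with one target span and one vulgar span.
example = Comment(
    text="Der Politiker XY ist ein kompletter Trottel.",
    offensive=True,
    spans=[Span(14, 16, "target"), Span(36, 43, "vulgarity")],
)

print(extract_spans(example, "target"))     # ['XY']
print(extract_spans(example, "vulgarity"))  # ['Trottel']
```

Keeping spans as character offsets rather than token indices is one common design choice for such datasets, since it stays valid under any downstream tokenizer; the actual dataset format may differ.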

Authors (6)
  1. Pia Pachinger (1 paper)
  2. Janis Goldzycher (7 papers)
  3. Anna Maria Planitzer (1 paper)
  4. Wojciech Kusa (16 papers)
  5. Allan Hanbury (45 papers)
  6. Julia Neidhardt (5 papers)
Citations (2)