ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection (2008.06408v1)

Published 13 Aug 2020 in cs.CL

Abstract: This paper describes our participation in SemEval-2020 Task 12: Multilingual Offensive Language Detection. We jointly trained a single model by fine-tuning Multilingual BERT to tackle the task across all the proposed languages: English, Danish, Turkish, Greek and Arabic. Our single model achieved competitive results, with performance close to that of the top-performing systems despite sharing the same parameters across all languages. We also conducted zero-shot and few-shot experiments to analyze transfer performance among these languages. We make our code public for further research.
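The core idea in the abstract, training one model on the pooled data of all five languages rather than one model per language, can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual pipeline: the dataset contents, function name, and language codes are hypothetical placeholders standing in for the OffensEval-style per-language corpora.

```python
# Hedged sketch: pool per-language (text, label) pairs into a single
# training set for joint fine-tuning of one multilingual model.
# Each example keeps a language tag so per-language transfer effects
# (e.g. zero-shot or few-shot evaluation) can be analyzed afterwards.

def build_joint_dataset(per_language_data):
    """Merge per-language (text, label) pairs into one training pool."""
    joint = []
    for lang, examples in per_language_data.items():
        for text, label in examples:
            joint.append({"text": text, "label": label, "lang": lang})
    return joint


# Hypothetical placeholder data for the five task languages.
per_language_data = {
    "en": [("example offensive tweet", 1), ("example benign tweet", 0)],
    "da": [("eksempel paa en tweet", 0)],
    "tr": [("ornek tweet", 0)],
    "el": [("paradeigma tweet", 0)],
    "ar": [("mithal tweet", 0)],
}

joint = build_joint_dataset(per_language_data)
print(len(joint))  # one shared pool instead of five separate datasets
```

In the paper's setting, this pooled set would then be fed to a single Multilingual BERT classifier, so all languages share the same fine-tuned parameters; a zero-shot experiment corresponds to simply leaving one language's examples out of the pool at training time.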

Authors (3)
  1. Juan Manuel Pérez (10 papers)
  2. Aymé Arango (2 papers)
  3. Franco Luque (4 papers)
Citations (3)