Exploring Transformer Based Models to Identify Hate Speech and Offensive Content in English and Indo-Aryan Languages (2111.13974v1)

Published 27 Nov 2021 in cs.CL

Abstract: Hate speech is considered to be one of the major issues currently plaguing online social media. Repeated exposure to hate speech has been shown to have physiological effects on the targeted users. Thus, hate speech, in all its forms, should be addressed on these platforms in order to protect users' well-being. In this paper, we explore several Transformer-based machine learning models for the detection of hate speech and offensive content in English and Indo-Aryan languages at FIRE 2021. We explore models such as mBERT, XLMR-large, and XLMR-base under the team name "Super Mario". Our models placed 2nd in the Code-Mixed dataset (Macro F1: 0.7107), 2nd in Hindi two-class classification (Macro F1: 0.7797), 4th in the English four-class category (Macro F1: 0.8006), and 12th in the English two-class category (Macro F1: 0.6447).
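
The abstract describes fine-tuning multilingual Transformer encoders (mBERT, XLM-R) for hate-speech and offensive-content classification. The sketch below illustrates that general approach with Hugging Face Transformers; the model name, hyperparameters, toy training examples, and label scheme are illustrative assumptions, not the authors' exact pipeline or settings.

```python
# Minimal sketch of fine-tuning XLM-R for hate-speech classification.
# Hyperparameters, data, and label names are assumptions for illustration.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"  # the paper also reports mBERT and XLM-R-large
NUM_LABELS = 2                   # 2 for the two-class tasks, 4 for the English four-class task

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

# Toy examples standing in for the shared-task training data.
train_data = Dataset.from_dict({
    "text": ["example offensive post", "example benign post"],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

train_data = train_data.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Macro F1 is the metric reported in the abstract.
    return {"macro_f1": f1_score(labels, preds, average="macro")}

args = TrainingArguments(
    output_dir="xlmr-hate-speech",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    compute_metrics=compute_metrics,
)
trainer.train()
```

The same recipe applies to the code-mixed and Hindi tasks by swapping in the corresponding training split and label set; only the number of output labels and the data change.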

Authors (5)
  1. Somnath Banerjee (22 papers)
  2. Maulindu Sarkar (2 papers)
  3. Nancy Agrawal (1 paper)
  4. Punyajoy Saha (27 papers)
  5. Mithun Das (16 papers)
Citations (35)
