In-game Toxic Language Detection: Shared Task and Attention Residuals (2211.05995v3)

Published 11 Nov 2022 in cs.CL

Abstract: In-game toxic language has become a pressing issue in the gaming industry and community. Several frameworks and models for analyzing online game toxicity have been proposed. However, detecting toxicity remains challenging because in-game chat messages are extremely short. In this paper, we describe how the in-game toxic language shared task was established using real-world in-game chat data. In addition, we propose and introduce a model/framework for toxic language token tagging (slot filling) from in-game chat. The data and code will be released.
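
The abstract frames toxicity detection as token tagging (slot filling), i.e., labeling each token of a chat message rather than classifying the whole message. The sketch below illustrates that general setup with a generic Hugging Face token-classification model; it is not the authors' architecture (the paper's "attention residuals" component is not reproduced here), and the BIO-style label set and base checkpoint are assumptions for illustration. Predictions are arbitrary until the model is fine-tuned on the shared-task data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label set for token-level toxicity tagging (not the paper's exact scheme).
labels = ["O", "B-TOXIC", "I-TOXIC"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# A short in-game chat message; such messages are typically only a few tokens long.
chat = "gg ez uninstall noob"
inputs = tokenizer(chat, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Assign each subword token the label with the highest score.
predictions = logits.argmax(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for token, pred in zip(tokens, predictions):
    print(f"{token:>12}  {labels[pred]}")
```
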

Authors (4)
  1. Yuanzhe Jia (4 papers)
  2. Weixuan Wu (1 paper)
  3. Feiqi Cao (9 papers)
  4. Soyeon Caren Han (48 papers)
Citations (2)