
Can LLMs Recognize Toxicity? A Structured Investigation Framework and Toxicity Metric (2402.06900v5)

Published 10 Feb 2024 in cs.CL and cs.AI

Abstract: In the pursuit of developing LLMs that adhere to societal standards, it is imperative to detect toxicity in generated text. Most existing toxicity metrics rely on encoder models trained on specific toxicity datasets, which makes them susceptible to out-of-distribution (OOD) problems and dependent on each dataset's definition of toxicity. In this paper, we introduce a robust metric grounded in LLMs that flexibly measures toxicity according to a given definition. We first analyze toxicity factors, then examine the intrinsic toxic attributes of LLMs to ascertain their suitability as evaluators. Finally, we evaluate the performance of our metric with detailed analysis. Our empirical results demonstrate outstanding performance in measuring toxicity within verified factors, improving on conventional metrics by 12 points in F1 score. Our findings also indicate that upstream toxicity significantly influences downstream metrics, suggesting that LLMs are unsuitable for toxicity evaluations within unverified factors.
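The core idea, an LLM prompted with an explicit toxicity definition and used as the judge, can be sketched as follows. This is a minimal illustration assuming an OpenAI-style chat API; the model name, the definition text, and the prompt wording are placeholders for illustration, not the paper's actual configuration.

```python
# Minimal LLM-as-judge toxicity check. Assumes an OpenAI-style chat API;
# model name, definition, and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOXICITY_DEFINITION = (
    "Text is toxic if it contains insults, threats, identity attacks, "
    "or content likely to make someone leave a discussion."
)

def judge_toxicity(text: str) -> bool:
    """Ask the LLM whether `text` is toxic under the supplied definition."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a toxicity evaluator. Definition: {TOXICITY_DEFINITION} "
                    "Answer with exactly 'toxic' or 'non-toxic'."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic judgments for reproducible scoring
    )
    return response.choices[0].message.content.strip().lower() == "toxic"

if __name__ == "__main__":
    print(judge_toxicity("Have a great day!"))  # expected: False
```

Forcing a binary "toxic"/"non-toxic" answer at temperature 0 makes the judgment straightforward to score against labeled data, e.g., with the F1 metric used in the paper's evaluation, and swapping in a different definition string changes the notion of toxicity being measured without retraining anything.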

Authors (4)
  1. Hyukhun Koh (8 papers)
  2. Dohyung Kim (23 papers)
  3. Minwoo Lee (31 papers)
  4. Kyomin Jung (76 papers)
Citations (3)