RECAST: Interactive Auditing of Automatic Toxicity Detection Models (2001.01819v2)
Published 7 Jan 2020 in cs.CL, cs.CY, and cs.LG
Abstract: As toxic language becomes nearly pervasive online, there has been increasing interest in leveraging advances in NLP, such as very large transformer models, to automatically detect and remove toxic comments. Despite fairness concerns, a lack of adversarial robustness, and limited prediction explainability in deep learning systems, there is currently little work on auditing these systems so that both developers and users can understand how they work. We present our ongoing work, RECAST, an interactive tool for examining toxicity detection models by visualizing explanations for predictions and providing alternative wordings for detected toxic speech.
- Austin P. Wright
- Omar Shaikh
- Haekyu Park
- Will Epperson
- Muhammed Ahmed
- Stephane Pinel
- Diyi Yang
- Duen Horng Chau
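The abstract names two capabilities without spelling out their mechanics: per-prediction explanations and alternative wordings for flagged text. The abstract does not fix an implementation, so the sketch below shows one generic, model-agnostic way to produce word-level explanations for a toxicity classifier: leave-one-out occlusion, which scores each word by how much the toxicity prediction drops when that word is removed. The model checkpoint (`unitary/toxic-bert`), the `toxic` label name, and the occlusion technique itself are illustrative assumptions, not RECAST's actual design.

```python
from transformers import pipeline

# Any public toxicity classifier works here; "unitary/toxic-bert" is one
# openly available example (an assumption, not the model RECAST uses).
clf = pipeline("text-classification", model="unitary/toxic-bert")

def toxicity(text: str) -> float:
    """Return the score of the 'toxic' label (label names depend on the model)."""
    all_scores = clf(text, top_k=None)  # scores for every label
    return next(s["score"] for s in all_scores if s["label"] == "toxic")

def word_attributions(text: str) -> list[tuple[str, float]]:
    """Score each word by the toxicity drop when it is occluded (removed)."""
    words = text.split()
    base = toxicity(text)
    attributions = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        attributions.append((word, base - toxicity(ablated)))
    return attributions

if __name__ == "__main__":
    for word, delta in word_attributions("you are a complete idiot"):
        # Large positive deltas mark words the model treats as drivers of
        # toxicity; such words are natural candidates for rewording suggestions.
        print(f"{word:>10s}  {delta:+.3f}")
```

Occlusion is a reasonable fit for the auditing setting the abstract describes because it treats the detector as a black box: it needs only prediction scores, not gradients or model internals, so the same probe works across whatever classifiers a developer or user wants to audit.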