Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators (2311.07879v4)
Abstract: Extensive work on automated content moderation has focused on developing models that identify toxic, offensive, and hateful content, with the aim of lightening the load for moderators. Yet it remains unclear whether improvements on those tasks have truly addressed moderators' needs. In this paper, we surface gaps between past research efforts to automate aspects of content moderation and the needs of volunteer content moderators in identifying violations of their communities' moderation rules. To do so, we conduct a model review on Hugging Face to assess the availability of models covering the moderation rules and guidelines of three exemplar forums. We further put state-of-the-art LLMs to the test, evaluating how well they flag violations of platform rules from one particular forum. Finally, we conduct a user survey with volunteer moderators to gain insight into their perspectives on useful moderation models. Overall, we observe a non-trivial gap: models are missing for many rules, and LLMs exhibit moderate to low performance on a significant portion of them. Moderators' reports provide guidance for future work on developing moderation-assistant models.
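The model-review step described above can be caricatured as a coverage computation: map each community rule to a moderation category, then check which categories have any publicly available model. The sketch below is illustrative only (the rule names, categories, and coverage set are hypothetical, not the paper's actual data or pipeline).

```python
# Illustrative sketch, not the paper's actual methodology: given a forum's
# rules tagged with moderation categories, report which rules have no
# off-the-shelf model covering their category.

from typing import Dict, List, Set


def coverage_gap(rules: Dict[str, str], covered: Set[str]) -> List[str]:
    """Return the rules whose category has no available model."""
    return [rule for rule, category in rules.items() if category not in covered]


# Hypothetical rules from one forum, each tagged with a category.
rules = {
    "No personal attacks": "toxicity",
    "No hate speech": "hate",
    "Posts must be on-topic": "off-topic",
    "No low-effort memes": "content-format",
}

# Categories for which public models exist (illustrative).
covered = {"toxicity", "hate", "spam"}

print(coverage_gap(rules, covered))
# Rules like topicality or content-format checks fall outside the
# toxicity-centric categories that most released models target.
```

Even in this toy form, the pattern mirrors the paper's finding: model availability clusters around toxicity-style categories, leaving community-specific rules uncovered.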
Authors: Yang Trista Cao, Lovely-Frances Domingo, Sarah Ann Gilbert, Michelle Mazurek, Katie Shilton, Hal Daumé III