Large Language Models for Automatic Detection of Sensitive Topics (2409.00940v1)

Published 2 Sep 2024 in cs.CL and cs.AI

Abstract: Sensitive information detection is crucial in content moderation to maintain safe online communities. Assisting in this traditionally manual process could relieve human moderators from overwhelming and tedious tasks, allowing them to focus solely on flagged content that may pose potential risks. Rapidly advancing LLMs are known for their capability to understand and process natural language and so present a potential solution to support this process. This study explores the capabilities of five LLMs for detecting sensitive messages in the mental well-being domain within two online datasets and assesses their performance in terms of accuracy, precision, recall, F1 scores, and consistency. Our findings indicate that LLMs have the potential to be integrated into the moderation workflow as a convenient and precise detection tool. The best-performing model, GPT-4o, achieved an average accuracy of 99.5% and an F1-score of 0.99. We discuss the advantages and potential challenges of using LLMs in the moderation workflow and suggest that future research should address the ethical considerations of utilising this technology.
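
The sketch below is only an illustration of the kind of evaluation the abstract describes: an LLM labels each message as sensitive or not, and accuracy, precision, recall, and F1 are computed against gold labels. The prompt wording, the classify() stub, and the example data are hypothetical and not taken from the paper.

```python
# Illustrative sketch, not the authors' code: binary sensitive/not-sensitive
# labelling followed by the metrics reported in the abstract.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical moderation prompt; the paper's actual prompts may differ.
PROMPT_TEMPLATE = (
    "You are a content moderator. Label the following message as "
    "'sensitive' or 'not sensitive' with respect to mental well-being.\n\n"
    "Message: {message}\nLabel:"
)

def classify(message: str) -> int:
    """Stub standing in for an LLM call (e.g. to GPT-4o).
    Returns 1 for 'sensitive', 0 otherwise."""
    # Placeholder heuristic so the sketch runs without an API key.
    return int("self-harm" in message.lower())

# Hypothetical messages and gold labels (1 = sensitive).
messages = ["I keep having thoughts of self-harm", "Lovely weather today"]
gold = [1, 0]
pred = [classify(m) for m in messages]

print("accuracy :", accuracy_score(gold, pred))
print("precision:", precision_score(gold, pred, zero_division=0))
print("recall   :", recall_score(gold, pred, zero_division=0))
print("F1       :", f1_score(gold, pred, zero_division=0))
```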

Authors (9)
  1. Ruoyu Wen (2 papers)
  2. Stephanie Elena Crowe (1 paper)
  3. Kunal Gupta (12 papers)
  4. Xinyue Li (34 papers)
  5. Mark Billinghurst (11 papers)
  6. Simon Hoermann (2 papers)
  7. Dwain Allan (1 paper)
  8. Alaeddin Nassani (3 papers)
  9. Thammathip Piumsomboon (1 paper)