Efficient Toxic Content Detection by Bootstrapping and Distilling Large Language Models (2312.08303v1)
Abstract: Toxic content detection is crucial for online services to remove inappropriate content that violates community standards. To automate the detection process, prior works have proposed a variety of ML approaches to train language models (LMs) for toxic content detection. However, both their accuracy and transferability across datasets are limited. Recently, LLMs have shown promise in toxic content detection due to their superior zero-shot and few-shot in-context learning ability as well as broad transferability on ML tasks. However, efficiently designing prompts for LLMs remains challenging. Moreover, the high run-time cost of LLMs may hinder their deployment in production. To address these challenges, in this work, we propose BD-LLM, a novel and efficient approach to Bootstrapping and Distilling LLMs for toxic content detection. Specifically, we design a novel prompting method named Decision-Tree-of-Thought (DToT) to bootstrap LLMs' detection performance and extract high-quality rationales. DToT can automatically select more fine-grained context to re-prompt LLMs when their responses lack confidence. Additionally, we use the rationales extracted via DToT to fine-tune student LMs. Our experimental results on various datasets demonstrate that DToT can improve the accuracy of LLMs by up to 4.6%. Furthermore, student LMs fine-tuned with rationales extracted via DToT outperform baselines on all datasets with up to 16.9% accuracy improvement, while being more than 60x smaller than conventional LLMs. Finally, we observe that student LMs fine-tuned with rationales exhibit better cross-dataset transferability.
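As described, DToT is essentially a confidence-gated re-prompting loop over a tree of increasingly fine-grained contexts. The sketch below illustrates that loop; `query_llm`, the `ContextNode` structure, the child-selection rule, and the 0.9 confidence threshold are all illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of the Decision-Tree-of-Thought (DToT) loop: re-prompt the
# LLM with finer-grained context whenever its answer lacks confidence.
# `query_llm`, the tree layout, and the threshold are assumptions, not the
# paper's actual implementation.
from dataclasses import dataclass, field


@dataclass
class ContextNode:
    """A context snippet plus finer-grained child contexts."""
    context: str
    children: list["ContextNode"] = field(default_factory=list)


def query_llm(prompt: str) -> tuple[str, float, str]:
    """Hypothetical LLM call returning (label, confidence, rationale)."""
    raise NotImplementedError("replace with a real LLM API call")


def dtot_classify(text: str, root: ContextNode, threshold: float = 0.9):
    """Descend the context tree, re-prompting while confidence stays low."""
    node = root
    label, conf, rationale = query_llm(f"{node.context}\n\nText: {text}\nIs this toxic?")
    while conf < threshold and node.children:
        # A real system would pick the child most relevant to `text`
        # (e.g., by embedding similarity); we naively take the first.
        node = node.children[0]
        label, conf, rationale = query_llm(f"{node.context}\n\nText: {text}\nIs this toxic?")
    return label, conf, rationale
```

The rationale returned at the final step is what the distillation stage consumes: the student LM is fine-tuned to emit both the label and the teacher's rationale. Below is a minimal sketch of one training step, assuming a FLAN-T5-style seq2seq student from Hugging Face transformers and a hypothetical (text, rationale, label) triple; the prompt template and hyperparameters are likewise illustrative.

```python
# Hedged sketch of rationale distillation: one fine-tuning step that trains a
# small student LM to produce the label together with the teacher rationale.
# Model choice, template, and learning rate are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Hypothetical training triple: input text, DToT-extracted rationale, gold label.
text = "example post"
rationale = "the post attacks a protected group with a slur"
label = "toxic"

inputs = tokenizer(f"Is the following text toxic? {text}", return_tensors="pt")
targets = tokenizer(f"{label}, because {rationale}", return_tensors="pt")

optimizer.zero_grad()
loss = model(**inputs, labels=targets["input_ids"]).loss  # standard seq2seq loss
loss.backward()
optimizer.step()
```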
Authors: Jiang Zhang, Qiong Wu, Yiming Xu, Cheng Cao, Zheng Du, Konstantinos Psounis