
Reward Modeling for Mitigating Toxicity in Transformer-based Language Models (2202.09662v6)

Published 19 Feb 2022 in cs.CL and cs.AI

Abstract: Transformer-based LLMs can generate fluent text and be efficiently adapted to a wide range of natural language generation tasks. However, LLMs pretrained on large unlabeled web text corpora have been shown to degenerate into toxic content and socially biased behavior, hindering their safe deployment. Various detoxification methods have been proposed to mitigate LLM toxicity; however, these methods struggle to detoxify LLMs when conditioned on prompts that mention specific social identities related to gender, race, or religion. In this study, we propose Reinforce-Detoxify, a reinforcement learning-based method for mitigating toxicity in LLMs. We address the challenge of safety in LLMs and propose a new reward model that can detect toxic content and mitigate unintended bias toward social identities in toxicity prediction. Experiments demonstrate that Reinforce-Detoxify outperforms existing detoxification approaches on automatic evaluation metrics, indicating that our approach is effective at detoxifying LLMs and less prone to unintended bias toward social identities in generated content.
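The abstract describes a reinforcement learning setup in which a reward model scores generated text for toxicity and the LLM is fine-tuned to maximize that reward. Below is a minimal illustrative sketch of that general idea using a REINFORCE-style update; the model names (gpt2, the toxicity classifier path), the reward definition, and the training details are assumptions made for illustration, not the paper's actual implementation.

import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
)

# Hypothetical model choices for illustration only.
POLICY_NAME = "gpt2"                         # base LM to detoxify (assumed)
REWARD_NAME = "path/to/toxicity-classifier"  # binary toxicity classifier (hypothetical)

tok = AutoTokenizer.from_pretrained(POLICY_NAME)
policy = AutoModelForCausalLM.from_pretrained(POLICY_NAME)
rm_tok = AutoTokenizer.from_pretrained(REWARD_NAME)
reward_model = AutoModelForSequenceClassification.from_pretrained(REWARD_NAME)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-5)


def toxicity_reward(texts):
    """Reward = 1 - predicted toxicity probability (illustrative definition)."""
    with torch.no_grad():
        enc = rm_tok(texts, return_tensors="pt", padding=True, truncation=True)
        probs = reward_model(**enc).logits.softmax(dim=-1)
    return 1.0 - probs[:, 1]  # assumes label index 1 = "toxic"


prompts = ["The protesters started to"]  # toy prompt; real training uses prompt datasets
for step in range(3):  # tiny demo loop
    enc = tok(prompts, return_tensors="pt")

    # Sample a continuation from the current policy.
    gen = policy.generate(
        **enc, do_sample=True, max_new_tokens=20, pad_token_id=tok.eos_token_id
    )
    texts = tok.batch_decode(gen, skip_special_tokens=True)
    reward = toxicity_reward(texts).mean()

    # REINFORCE-style update (no baseline, prompt tokens included, for brevity):
    # minimizing reward * NLL pushes up the likelihood of low-toxicity samples.
    nll = policy(gen, labels=gen).loss
    loss = reward * nll

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: reward={reward.item():.3f}, loss={loss.item():.3f}")

In practice, RL-based detoxification methods usually also constrain the fine-tuned policy against the original LM (e.g., via a KL penalty) and rely on a reward model trained to avoid unintended bias toward identity mentions, which the abstract emphasizes; the sketch above omits those details.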

Authors (3)
  1. Farshid Faal (2 papers)
  2. Ketra Schmitt (1 paper)
  3. Jia Yuan Yu (36 papers)
Citations (24)