CMD: a framework for Context-aware Model self-Detoxification (2308.08295v3)

Published 16 Aug 2023 in cs.CL

Abstract: Text detoxification aims to minimize the risk of LLMs producing toxic content. Existing detoxification methods, which either directly constrain the model output or further train the model on a non-toxic corpus, fail to achieve a decent balance between detoxification effectiveness and generation quality. This issue stems from neglecting the constraints imposed by the context: LLMs are designed to generate output that closely matches the context, while detoxification methods endeavor to ensure the safety of the output even if it semantically deviates from the context. In view of this, we introduce a Context-aware Model self-Detoxification (CMD) framework that attends to both the context and the detoxification process, i.e., it first detoxifies the context and then makes the LLM generate along the safe context. Specifically, the CMD framework involves two phases: utilizing LLMs to synthesize data and applying these data for training. We also introduce a toxic contrastive loss that pushes the model generation away from negative toxic samples. Experiments on various LLMs have verified the effectiveness of our CMD framework, which yields the best performance compared to baselines.
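The abstract does not give the exact form of the toxic contrastive loss, only that it pushes the model's generation away from toxic negative samples. Below is a minimal sketch of one plausible formulation, an InfoNCE-style objective over a non-toxic positive and a set of toxic negatives; the function name, the use of cosine similarity, and the temperature parameter are all illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def toxic_contrastive_loss(gen_repr, pos_repr, neg_reprs, temperature=0.1):
    """Hypothetical InfoNCE-style contrastive loss: pull the generation
    representation toward a non-toxic positive and push it away from
    toxic negatives. Not the paper's exact loss.

    gen_repr:  (d,)   representation of the model's generation
    pos_repr:  (d,)   representation of a non-toxic reference
    neg_reprs: (k, d) representations of k toxic negative samples
    """
    pos_sim = F.cosine_similarity(gen_repr, pos_repr, dim=0) / temperature
    neg_sims = F.cosine_similarity(
        gen_repr.unsqueeze(0), neg_reprs, dim=1
    ) / temperature
    # The positive sits at index 0; cross-entropy maximizes its score
    # relative to the toxic negatives.
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sims]).unsqueeze(0)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

# Usage with random features (d = hidden size, k = number of toxic negatives):
d, k = 768, 4
loss = toxic_contrastive_loss(torch.randn(d), torch.randn(d), torch.randn(k, d))
```

In training, such a term would typically be added to the standard language-modeling loss with a weighting coefficient, so the model learns to continue the (detoxified) context while staying distant from toxic continuations in representation space.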

Authors (8)
  1. Zecheng Tang (19 papers)
  2. Keyan Zhou (4 papers)
  3. Juntao Li (89 papers)
  4. Yuyang Ding (13 papers)
  5. Pinzheng Wang (7 papers)
  6. Bowen Yan (24 papers)
  7. Min Zhang (630 papers)
  8. Rejie Hua (1 paper)
Citations (4)