
Self-Detoxifying Language Models via Toxification Reversal (2310.09573v1)

Published 14 Oct 2023 in cs.CL

Abstract: LLM detoxification aims to minimize the risk of generating offensive or harmful content in pretrained LLMs (PLMs) for safer deployment. Existing methods can be roughly categorized as finetuning-based and decoding-based. However, the former is often resource-intensive, while the latter relies on additional components and can compromise generation fluency. In this paper, we propose a more lightweight approach that enables the PLM itself to achieve "self-detoxification". Our method is built upon the observation that prepending a negative steering prompt can effectively induce PLMs to generate toxic content. At the same time, we are inspired by recent research in the interpretability field, which formulates the evolving contextualized representations within the PLM as an information stream facilitated by the attention layers. Drawing on this idea, we devise a method to identify the toxification direction from the normal generation process to the one prompted with the negative prefix, and then steer the generation in the reversed direction by manipulating the information movement within the attention layers. Experimental results show that our approach, without any fine-tuning or extra components, achieves performance comparable to state-of-the-art methods.
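The core idea of the abstract can be illustrated with a minimal sketch of the vector arithmetic: contrast hidden representations from a normal run with those from a negatively-prefixed run to estimate a "toxification direction", then steer generation along its reverse. The function names and the use of plain NumPy arrays here are illustrative assumptions; the paper operates on the attention-layer information stream inside the actual PLM, not on raw arrays like this.

```python
import numpy as np

def toxification_direction(h_normal, h_prefixed):
    """Estimate a per-position toxification direction as the (normalized)
    difference between representations from the negatively-prefixed run
    and the normal run. Hypothetical stand-in for the paper's
    attention-level analysis."""
    d = h_prefixed - h_normal
    norm = np.linalg.norm(d, axis=-1, keepdims=True)
    return d / np.maximum(norm, 1e-8)

def reverse_steer(h, direction, alpha):
    """Move hidden states along the reversed (negated) toxification
    direction, scaled by a steering strength alpha."""
    return h - alpha * direction

# Toy demonstration: simulate "toxified" states as normal states plus a shift.
rng = np.random.default_rng(0)
h_normal = rng.normal(size=(4, 8))       # 4 positions, 8-dim representations
toxic_shift = rng.normal(size=(4, 8))
h_prefixed = h_normal + toxic_shift

d = toxification_direction(h_normal, h_prefixed)
alpha = np.linalg.norm(toxic_shift, axis=-1, keepdims=True)
h_steered = reverse_steer(h_prefixed, d, alpha)

# With alpha equal to the shift magnitude, reversal recovers the normal states.
print(np.allclose(h_steered, h_normal))  # → True
```

In the actual method, the steering would be applied during decoding inside the attention layers rather than as a post-hoc correction on static vectors; this toy example only shows why subtracting the identified direction undoes the induced shift.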

Authors (5)
  1. Chak Tou Leong (22 papers)
  2. Yi Cheng (78 papers)
  3. Jiashuo Wang (19 papers)
  4. Jian Wang (967 papers)
  5. Wenjie Li (183 papers)
Citations (24)
