Self-Guard: Empower the LLM to Safeguard Itself (2310.15851v2)

Published 24 Oct 2023 in cs.CL

Abstract: Jailbreak attacks can bypass the safety measures of an LLM, causing it to generate harmful content. This misuse of LLMs has led to negative societal consequences. Currently, there are two main approaches to address jailbreak attacks: safety training and safeguards. Safety training focuses on further training the LLM to enhance its safety. Safeguards, on the other hand, involve implementing external models or filters to prevent harmful outputs. However, safety training is limited in its ability to adapt to new attack types and often leads to a drop in model performance, while safeguards have proven to be of limited help. To tackle these issues, we propose a novel approach called Self-Guard, which combines the strengths of both safety methods. Self-Guard includes two stages. In the first stage, we enhance the model's ability to assess harmful content, and in the second stage, we instruct the model to consistently perform harmful content detection on its own responses. The experiments demonstrate that Self-Guard is robust against jailbreak attacks. In the bad case analysis, we find that the LLM occasionally provides harmless responses to harmful queries. Additionally, we evaluated the general capabilities of the LLM before and after safety training, providing evidence that Self-Guard does not degrade the LLM's performance. In sensitivity tests, Self-Guard not only avoids inducing over-sensitivity in the LLM but can even mitigate this issue.
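The second stage described in the abstract, where the model checks its own output, implies a simple post-processing step at deployment time: the model appends a harmfulness label to its response, and a filter acts on that label. The sketch below is illustrative only; the exact tag strings, fallback policy, and refusal message are assumptions, not details from the paper.

```python
import re

# Illustrative tag names; the labels actually emitted by a
# Self-Guard-trained model are an assumption here.
REFUSAL = "Sorry, I can't help with that."

def self_guard_filter(response: str) -> str:
    """Post-process a response from a Self-Guard-style model.

    The model is assumed to append a [harmful] or [harmless] tag to
    its own output (stage 2). A plain string filter then suppresses
    tagged-harmful responses and strips the tag otherwise.
    """
    stripped = response.strip()
    tag = re.search(r"\[(harmful|harmless)\]\s*$", stripped, re.IGNORECASE)
    if tag is None:
        # No tag found: pass through (or refuse, per deployment policy).
        return stripped
    if tag.group(1).lower() == "harmful":
        return REFUSAL
    # Harmless: remove the tag before showing the response to the user.
    return stripped[: tag.start()].rstrip()
```

Because the safety decision is reduced to exact string matching on a trailing tag, the filter itself needs no model of harmfulness, which is the practical appeal of having the LLM do the detection.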

Authors (8)
  1. Zezhong Wang (30 papers)
  2. Fangkai Yang (45 papers)
  3. Lu Wang (329 papers)
  4. Pu Zhao (82 papers)
  5. Hongru Wang (62 papers)
  6. Liang Chen (360 papers)
  7. Qingwei Lin (81 papers)
  8. Kam-Fai Wong (92 papers)
Citations (18)