SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner (2406.05498v2)

Published 8 Jun 2024 in cs.CR and cs.AI

Abstract: Jailbreaking is an emerging adversarial attack that bypasses the safety alignment deployed in off-the-shelf LLMs and has evolved into multiple categories: human-based, optimization-based, generation-based, and the recent indirect and multilingual jailbreaks. However, delivering a practical jailbreak defense is challenging because it needs to not only handle all the above jailbreak attacks but also incur negligible delays to user prompts, as well as be compatible with both open-source and closed-source LLMs. Inspired by how the traditional security concept of shadow stacks defends against memory overflow attacks, this paper introduces a generic LLM jailbreak defense framework called SelfDefend, which establishes a shadow LLM as a defense instance to concurrently protect the target LLM instance in the normal stack and collaborate with it for checkpoint-based access control. The effectiveness of SelfDefend builds upon our observation that existing LLMs (both target and defense LLMs) have the capability to identify harmful prompts or intentions in user queries, which we empirically validate using the commonly used GPT-3.5/4 models across all major jailbreak attacks. To further improve the defense's robustness and minimize costs, we employ a data distillation approach to tune dedicated open-source defense models. These models outperform six state-of-the-art defenses and match the performance of GPT-4-based SelfDefend, with significantly lower extra delays. We also empirically show that the tuned models are robust to adaptive jailbreaks and prompt injections.
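The shadow-stack analogy above can be illustrated with a minimal sketch: the user prompt is sent concurrently to the target LLM and to a shadow defense LLM, and the target's answer is released only if the defense instance clears the prompt at the checkpoint. The `target_llm` and `defense_llm` functions below are hypothetical stand-ins, not the paper's actual models or prompts; SelfDefend uses real LLMs (e.g., GPT-3.5/4 or distilled open-source defense models) for both roles.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the target LLM instance in the "normal stack".
def target_llm(prompt: str) -> str:
    return f"response to: {prompt}"

# Hypothetical stand-in for the shadow defense LLM, which is asked to
# identify harmful prompts or intentions in the user query.
def defense_llm(prompt: str) -> str:
    harmful_markers = ("ignore previous instructions", "build a bomb")
    return "harmful" if any(m in prompt.lower() for m in harmful_markers) else "safe"

def selfdefend(prompt: str) -> str:
    # Checkpoint-based access control: both instances run concurrently, so
    # the defense check adds little extra delay; the target's answer is
    # released only if the shadow LLM deems the prompt safe.
    with ThreadPoolExecutor(max_workers=2) as pool:
        answer = pool.submit(target_llm, prompt)
        verdict = pool.submit(defense_llm, prompt)
        if verdict.result() == "safe":
            return answer.result()
        return "Request refused: potentially harmful prompt detected."
```

Because the two calls run in parallel, the user-visible latency is roughly the maximum of the two model calls rather than their sum, which is the property that makes the framework practical.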

Authors (10)
  1. Xunguang Wang (10 papers)
  2. Daoyuan Wu (39 papers)
  3. Zhenlan Ji (11 papers)
  4. Zongjie Li (29 papers)
  5. Pingchuan Ma (90 papers)
  6. Shuai Wang (466 papers)
  7. Yingjiu Li (13 papers)
  8. Yang Liu (2253 papers)
  9. Ning Liu (199 papers)
  10. Juergen Rahmel (1 paper)
Citations (2)