Exploiting Uncommon Text-Encoded Structures for Automated Jailbreaks in LLMs (2406.08754v2)

Published 13 Jun 2024 in cs.CL and cs.CR

Abstract: LLMs are widely used in natural language processing but face the risk of jailbreak attacks that maliciously induce them to generate harmful content. Existing jailbreak attacks, including character-level and context-level attacks, mainly focus on plain-text prompts without specifically exploring the significant influence of prompt structure. In this paper, we study how prompt structure contributes to jailbreak attacks. We introduce a novel structure-level attack method based on tail structures that are rarely used during LLM training, which we refer to as Uncommon Text-Encoded Structures (UTES). We extensively study 12 UTES templates and 6 obfuscation methods to build an effective automated jailbreak tool named StructuralSleight, which contains three escalating attack strategies: Structural Attack, Structural and Character/Context Obfuscation Attack, and Fully Obfuscated Structural Attack. Extensive experiments on existing LLMs show that StructuralSleight significantly outperforms baseline methods. In particular, the attack success rate reaches 94.62% on GPT-4o, a level not addressed by state-of-the-art techniques.

Authors (7)
  1. Bangxin Li (1 paper)
  2. Hengrui Xing (3 papers)
  3. Chao Huang (244 papers)
  4. Jin Qian (10 papers)
  5. Huangqing Xiao (2 papers)
  6. Linfeng Feng (7 papers)
  7. Cong Tian (21 papers)
Citations (2)