xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking (2501.16727v2)

Published 28 Jan 2025 in cs.CL

Abstract: Safety alignment mechanisms are essential for preventing LLMs from generating harmful information or unethical content. However, cleverly crafted prompts can bypass these safety measures without accessing the model's internal parameters, a phenomenon known as black-box jailbreak. Existing heuristic black-box attack methods, such as genetic algorithms, suffer from limited effectiveness due to their inherent randomness, while recent reinforcement learning (RL) based methods often lack robust and informative reward signals. To address these challenges, we propose a novel black-box jailbreak method leveraging RL, which optimizes prompt generation by analyzing the embedding proximity between benign and malicious prompts. This approach ensures that the rewritten prompts closely align with the intent of the original prompts while enhancing the attack's effectiveness. Furthermore, we introduce a comprehensive jailbreak evaluation framework incorporating keywords, intent matching, and answer validation to provide a more rigorous and holistic assessment of jailbreak success. Experimental results show the superiority of our approach, achieving state-of-the-art (SOTA) performance on several prominent open and closed-source LLMs, including Qwen2.5-7B-Instruct, Llama3.1-8B-Instruct, and GPT-4o-0806. Our method sets a new benchmark in jailbreak attack effectiveness, highlighting potential vulnerabilities in LLMs. The codebase for this work is available at https://github.com/Aegis1863/xJailbreak.
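The abstract describes an RL reward built from embedding proximity: a rewritten prompt should stay close to the original malicious prompt's intent while moving toward the benign region of representation space. The following is a minimal sketch of what such a proximity-based reward could look like; the function names, the additive combination, and the `embed` placeholder are illustrative assumptions, not the paper's actual formulation (see the linked repository for that).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def proximity_reward(rewritten: np.ndarray,
                     malicious: np.ndarray,
                     benign_center: np.ndarray) -> float:
    """Hypothetical reward for an RL prompt rewriter.

    Combines two signals:
      * intent preservation: similarity to the original malicious prompt,
      * evasiveness: similarity to a centroid of benign-prompt embeddings,
        on the assumption that prompts embedded near benign text are less
        likely to trigger safety refusals.
    """
    intent_score = cosine_similarity(rewritten, malicious)
    evasion_score = cosine_similarity(rewritten, benign_center)
    return intent_score + evasion_score  # weighting is an assumption

# Toy usage with random stand-ins for real sentence embeddings.
rng = np.random.default_rng(0)
malicious_emb = rng.normal(size=128)
benign_center = rng.normal(size=128)
rewritten_emb = 0.5 * malicious_emb + 0.5 * benign_center
reward = proximity_reward(rewritten_emb, malicious_emb, benign_center)
```

In an actual training loop, the reward would score each candidate rewrite produced by the policy model, so that prompts drifting away from the original intent (or remaining squarely in the refused region) receive lower returns.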

Authors (10)
  1. Sunbowen Lee (6 papers)
  2. Shiwen Ni (34 papers)
  3. Chi Wei (4 papers)
  4. Shuaimin Li (6 papers)
  5. Liyang Fan (5 papers)
  6. Ahmadreza Argha (8 papers)
  7. Hamid Alinejad-Rokny (25 papers)
  8. Ruifeng Xu (66 papers)
  9. Yicheng Gong (6 papers)
  10. Min Yang (239 papers)