Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Abstract: We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize a target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve a 100% attack success rate -- according to GPT-4 as a judge -- on Vicuna-13B, Mistral-7B, Phi-3-Mini, Nemotron-4-340B, Llama-2-Chat-7B/13B/70B, Llama-3-Instruct-8B, Gemma-7B, GPT-3.5, GPT-4o, and R2D2 from HarmBench, which was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models -- which do not expose logprobs -- via either a transfer or a prefilling attack, with a 100% success rate. In addition, we show how to use random search on a restricted set of tokens to find trojan strings in poisoned models -- a task that shares many similarities with jailbreaking -- which is the algorithm that brought us first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). For reproducibility, we provide the code, logs, and jailbreak artifacts in the JailbreakBench format at https://github.com/tml-epfl/LLM-adaptive-attacks.
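As a rough illustration of the random-search procedure described in the abstract, the sketch below mutates a few characters of a candidate adversarial suffix per iteration and keeps a change only if the scored objective does not decrease, with multiple restarts. The `score_fn` callable, the prompt `template`, and the character-level (rather than token-level) mutations are simplifying assumptions for this sketch, not the paper's implementation; the authors' actual code and prompt templates are in the repository linked above.

```python
import random
import string
from typing import Callable

def random_search_suffix(
    goal: str,
    score_fn: Callable[[str], float],
    template: str = "{goal} {suffix}",
    suffix_len: int = 25,
    n_iters: int = 1000,
    n_restarts: int = 10,
    n_chars_to_change: int = 4,
    alphabet: str = string.ascii_letters + string.digits + string.punctuation,
) -> tuple[str, float]:
    """Random search over an adversarial suffix to maximize score_fn(prompt).

    score_fn is assumed to return the log-probability that the target model
    assigns to a desired first response token (e.g., "Sure") for the given
    prompt. A perturbation is kept only if it does not decrease the objective;
    multiple restarts guard against bad initializations.
    """
    best_suffix, best_score = "", float("-inf")
    for _ in range(n_restarts):
        # Start each restart from a fresh random suffix.
        suffix = "".join(random.choice(alphabet) for _ in range(suffix_len))
        score = score_fn(template.format(goal=goal, suffix=suffix))
        for _ in range(n_iters):
            # Mutate a few random positions of the current suffix.
            candidate = list(suffix)
            for pos in random.sample(range(suffix_len), n_chars_to_change):
                candidate[pos] = random.choice(alphabet)
            candidate_str = "".join(candidate)
            candidate_score = score_fn(template.format(goal=goal, suffix=candidate_str))
            if candidate_score >= score:  # greedy: accept non-worsening changes
                suffix, score = candidate_str, candidate_score
        if score > best_score:
            best_suffix, best_score = suffix, score
    return best_suffix, best_score
```

In practice, `score_fn` would wrap a logprob query to the target model (e.g., the logprob of "Sure" as the first generated token), and the search would stop early once a judge deems the jailbreak successful.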
References
- Square Attack: a query-efficient black-box adversarial attack via random search. In ECCV, 2020.
- Anthropic. The Claude 3 model family: Opus, Sonnet, Haiku, 2024.
- Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
- Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognition, 2018.
- Evasion attacks against machine learning at test time. In ECML PKDD, pp. 387–402. Springer, 2013.
- On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
- Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419, 2023.
- Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
- RobustBench: a standardized adversarial robustness benchmark. In NeurIPS Datasets and Benchmarks Track, 2021.
- Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks. In AAAI, 2022a.
- Evaluating the adversarial robustness of adaptive test-time defenses. In Proceedings of the 39th International Conference on Machine Learning, 2022b.
- Attacking large language models with projected gradient descent. arXiv preprint arXiv:2402.09154, 2024.
- Gemini Team Google. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
- Query-based adversarial prompt generation. arXiv preprint arXiv:2402.12329, 2024.
- Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
- Open Sesame! Universal black box jailbreaking of large language models. arXiv preprint arXiv:2309.01446, 2023.
- AutoDAN: Generating stealthy jailbreak prompts on aligned large language models. arXiv preprint arXiv:2310.04451, 2023.
- Towards deep learning models resistant to adversarial attacks. ICLR, 2018.
- HarmBench: A standardized evaluation framework for automated red teaming and robust refusal. arXiv preprint arXiv:2402.04249, 2024.
- Tree of Attacks: Jailbreaking black-box LLMs automatically. arXiv preprint arXiv:2312.02119, 2023.
- Zvi Mowshowitz. Jailbreaking ChatGPT on release day. https://www.lesswrong.com/posts/RYcoJdvmoBbi5Nax7/jailbreaking-chatgpt-on-release-day, 2022. Accessed: 2024-02-25.
- OpenAI. OpenAI and journalism. https://openai.com/blog/openai-and-journalism, 2023. Accessed: 2023-04-24.
- Universal jailbreak backdoors from poisoned human feedback. arXiv preprint arXiv:2311.14455, 2023.
- Find the trojan: Universal backdoor detection in aligned LLMs. https://github.com/ethz-spylab/rlhf_trojan_competition, 2024.
- Leonard Rastrigin. The convergence of the random search method in the extremal control of a many parameter system. Automation and Remote Control, 24:1337–1342, 1963.
- SmoothLLM: Defending large language models against jailbreaking attacks. arXiv preprint arXiv:2310.03684, 2023.
- Scalable and transferable black-box jailbreaks for language models via persona modulation. arXiv preprint arXiv:2311.03348, 2023.
- AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980, 2020.
- PAL: Proxy-guided black-box attack on large language models. arXiv preprint arXiv:2402.09674, 2024.
- Intriguing properties of neural networks. ICLR, 2014.
- Kazuhiro Takemoto. All in how you ask for it: Simple black-box method for jailbreak attacks. arXiv preprint arXiv:2401.09798, 2024.
- Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
- On adaptive attacks to adversarial example defenses. In NeurIPS, 2020.
- SimpleSafetyTests: a test suite for identifying critical safety risks in large language models. arXiv preprint arXiv:2311.08370, 2023.
- Foot in the door: Understanding large language model jailbreaking via cognitive psychology. arXiv preprint arXiv:2402.15690, 2024.
- Jailbroken: How does LLM safety training fail? NeurIPS, 2023a.
- Jailbreak and guard aligned language models with only few in-context demonstrations. arXiv preprint arXiv:2310.06387, 2023b.
- GPTFuzzer: Red teaming large language models with auto-generated jailbreak prompts. arXiv preprint arXiv:2309.10253, 2023.
- How Johnny can persuade LLMs to jailbreak them: Rethinking persuasion to challenge AI safety by humanizing LLMs. arXiv preprint arXiv:2401.06373, 2024.
- AutoDAN: Automatic and interpretable adversarial attacks on large language models. arXiv preprint arXiv:2310.15140, 2023.
- Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.