Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks (2404.02151v3)

Published 2 Apr 2024 in cs.CR, cs.AI, cs.LG, and stat.ML

Abstract: We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize a target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve 100% attack success rate -- according to GPT-4 as a judge -- on Vicuna-13B, Mistral-7B, Phi-3-Mini, Nemotron-4-340B, Llama-2-Chat-7B/13B/70B, Llama-3-Instruct-8B, Gemma-7B, GPT-3.5, GPT-4o, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models -- that do not expose logprobs -- via either a transfer or prefilling attack with a 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models -- a task that shares many similarities with jailbreaking -- which is the algorithm that brought us the first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings, it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). For reproducibility purposes, we provide the code, logs, and jailbreak artifacts in the JailbreakBench format at https://github.com/tml-epfl/LLM-adaptive-attacks.

Jailbreaking Safety-Aligned LLMs through Adaptive Attacks

Introduction

The robustness of safety-aligned LLMs against adaptive adversarial attacks has emerged as a crucial research focus. This paper demonstrates that leading safety-aligned LLMs, including those from OpenAI (GPT-3.5 and GPT-4o), Meta (the Llama-2-Chat series), Google (Gemma-7B), Anthropic (the Claude models), and CAIS (R2D2), are vulnerable to simple but carefully designed adaptive jailbreaking attacks. It systematically evaluates the efficacy of different adversarial strategies and highlights a near-universal susceptibility of these models to being manipulated into generating harmful or prohibited content.

Methodology

The researchers adopted a multifaceted approach for jailbreaking attacks, incorporating:

  • Adversarial Prompt Design: A prompt template, sometimes adapted to the target model or model family, was crafted to evade built-in safety mechanisms.
  • Random Search (RS): An RS algorithm was employed to optimize a suffix appended to the harmful request, maximizing the log-probability of an affirmative target token (e.g., "Sure") at the start of the response; see the sketch after this list.
  • Adaptive Techniques: Strategies were tailored to exploit unique vulnerabilities, such as model-specific in-context learning sensitivities and API features like prefilling responses in Claude models.
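
The core of these attacks is a simple hill-climbing random search over the suffix. The following is a minimal sketch of that loop, not the paper's exact implementation: `sure_logprob` is a hypothetical scoring function that queries the target model and returns the log-probability of an affirmative first token (e.g., "Sure"), and the alphabet, mutation size, and iteration counts are illustrative.

```python
import random
import string

def random_search_suffix(prompt, sure_logprob, suffix_len=25,
                         iters=1000, chars_per_step=4, restarts=1):
    """Hill-climbing random search: mutate a few suffix characters per step
    and keep the mutation only if the target log-probability improves."""
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    best_suffix, best_score = None, float("-inf")

    for _ in range(restarts):  # optional random restarts
        suffix = [random.choice(alphabet) for _ in range(suffix_len)]
        score = sure_logprob(prompt + "".join(suffix))
        for _ in range(iters):
            cand = suffix.copy()
            for _ in range(chars_per_step):  # perturb a few random positions
                cand[random.randrange(suffix_len)] = random.choice(alphabet)
            cand_score = sure_logprob(prompt + "".join(cand))
            if cand_score > score:  # accept only improving candidates
                suffix, score = cand, cand_score
        if score > best_score:
            best_suffix, best_score = "".join(suffix), score

    return best_suffix, best_score
```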

Findings

The paper reports strikingly high success rates in bypassing the safety measures of various LLMs: the combination of adversarial prompting and RS achieved a 100% attack success rate, as judged by GPT-4, across a wide array of models. The research underscores the critical role of adaptivity in formulating successful jailbreaks, with different models exhibiting distinct vulnerabilities to specific strategic adjustments.

For instance, the Llama-2-Chat series, despite its robustness against standard attacks, was effectively compromised using a combination of tailored prompting and RS, augmented by a self-transfer technique. Similarly, the Claude models, known for their stringent safety training, do not expose logprobs and were jailbroken either via a transfer attack or by exploiting a model-specific API feature: prefilling of the assistant's response (illustrated below).
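
The prefilling vulnerability comes from the Anthropic Messages API, which lets the caller supply the beginning of the assistant's reply as the final message; the model then continues from that prefix. A hedged illustration of the mechanism follows, using the official `anthropic` Python client; the model name and message contents are placeholders rather than the paper's artifacts.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # placeholder model name
    max_tokens=256,
    messages=[
        {"role": "user", "content": "<request from the benchmark>"},
        # Prefill: a trailing assistant turn is treated as the start of the
        # model's own reply, which the attack uses to steer it past a refusal.
        {"role": "assistant", "content": "Sure, here is"},
    ],
)
print(response.content[0].text)
```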

The paper further extends its exploration to poisoned models, showing how a constrained RS, with the token search space restricted using prior knowledge, can recover hidden trojan triggers, a task that shares many similarities with jailbreaking. This algorithm earned the authors first place in the SaTML'24 Trojan Detection Competition.
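
For trojan detection, the same random-search loop runs over token IDs drawn from a restricted candidate set chosen from prior knowledge about the poisoning. A minimal sketch under stated assumptions: `candidate_tokens` is that restricted set and `trigger_score` is a hypothetical function measuring how strongly a candidate sequence elicits the backdoored behavior.

```python
import random

def search_trojan_trigger(candidate_tokens, trigger_score,
                          trigger_len=8, iters=2000):
    """Random search restricted to `candidate_tokens`: swap one position per
    step and keep the swap only if the trigger score improves."""
    trigger = [random.choice(candidate_tokens) for _ in range(trigger_len)]
    best = trigger_score(trigger)

    for _ in range(iters):
        cand = trigger.copy()
        cand[random.randrange(trigger_len)] = random.choice(candidate_tokens)
        score = trigger_score(cand)
        if score > best:
            trigger, best = cand, score

    return trigger, best
```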

Implications

This research casts a spotlight on the vulnerability landscape of current safety-aligned LLMs and calls for a reevaluation of existing defense mechanisms. It suggests that no single method is a panacea against adaptive attacks, underscoring the need for a more dynamic and comprehensive approach to evaluating and bolstering model robustness. The findings serve as a valuable resource for future work on designing more resilient and trustworthy LLMs.

Outlook and Recommendations

The paper concludes with recommendations for advancing adversarial attack methodologies, advocating for a combination of manual prompt optimization, standard optimization techniques, and the exploitation of model-specific vulnerabilities. It emphasizes the importance of devising a blend of static and adaptive strategies for a comprehensive assessment of LLM robustness.

Moreover, the researchers project that their techniques could extend beyond conventional jailbreaking scenarios, potentially affecting areas such as copyright infringement and system hijacking through prompt injection. This underlines the need for ongoing research into more sophisticated defenses in the ever-evolving arms race between LLM capabilities and adversarial threats.

Concluding Remarks

In summary, this paper provides a critical examination of the vulnerabilities of leading safety-aligned LLMs to adaptive adversarial attacks. Through meticulous analysis and innovative attack strategies, it highlights the need for the AI community to adopt a more holistic view of model security and integrity. The insights garnered here pave the way for future research dedicated to ensuring the ethical and safe deployment of LLMs in society.

Authors (3)
  1. Maksym Andriushchenko (33 papers)
  2. Francesco Croce (34 papers)
  3. Nicolas Flammarion (63 papers)
Citations (87)