Papers
Topics
Authors
Recent
2000 character limit reached

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

Published 2 Apr 2024 in cs.CR, cs.AI, cs.LG, and stat.ML | (2404.02151v4)

Abstract: We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize a target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve 100% attack success rate -- according to GPT-4 as a judge -- on Vicuna-13B, Mistral-7B, Phi-3-Mini, Nemotron-4-340B, Llama-2-Chat-7B/13B/70B, Llama-3-Instruct-8B, Gemma-7B, GPT-3.5, GPT-4o, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models -- that do not expose logprobs -- via either a transfer or prefilling attack with a 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models -- a task that shares many similarities with jailbreaking -- which is the algorithm that brought us the first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings, it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). For reproducibility purposes, we provide the code, logs, and jailbreak artifacts in the JailbreakBench format at https://github.com/tml-epfl/LLM-adaptive-attacks.

Definition Search Book Streamline Icon: https://streamlinehq.com
References (41)
  1. Square attack: a query-efficient black-box adversarial attack via random search. In ECCV, 2020.
  2. Anthropic. The claude 3 model family: Opus, sonnet, haiku, 2023.
  3. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
  4. Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognition, 2018.
  5. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13, pp.  387–402. Springer, 2013.
  6. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
  7. Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419, 2023.
  8. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
  9. Robustbench: a standardized adversarial robustness benchmark. In NeurIPS Datasets and Benchmarks Track, 2021.
  10. Sparse-rs: a versatile framework for query-efficient sparse black-box adversarial attacks. In AAAI, 2022a.
  11. Evaluating the adversarial robustness of adaptive test-time defenses. In Proceedings of the 39th International Conference on Machine Learning, 2022b.
  12. Attacking large language models with projected gradient descent. arXiv preprint arXiv:2402.09154, 2024.
  13. Gemini Team Google. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
  14. Query-based adversarial prompt generation. arXiv preprint arXiv:2402.12329, 2024.
  15. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
  16. Open sesame! universal black box jailbreaking of large language models. arXiv preprint arXiv:2309.01446, 2023.
  17. Autodan: Generating stealthy jailbreak prompts on aligned large language models. arXiv preprint arXiv:2310.04451, 2023.
  18. Towards deep learning models resistant to adversarial attacks. ICLR, 2018.
  19. Harmbench: A standardized evaluation framework for automated red teaming and robust refusal. arXiv preprint arXiv:2402.04249, 2024.
  20. Tree of attacks: Jailbreaking black-box llms automatically. arXiv preprint arXiv:2312.02119, 2023.
  21. Zvi Mowshowitz. Jailbreaking chatgpt on release day. https://www.lesswrong.com/posts/RYcoJdvmoBbi5Nax7/jailbreaking-chatgpt-on-release-day, 2022. Accessed: 2024-02-25.
  22. OpenAI. Openai and journalism. https://openai.com/blog/openai-and-journalism, 2023. Accessed: 2023-04-24.
  23. Universal jailbreak backdoors from poisoned human feedback. arXiv preprint arXiv:2311.14455, 2023.
  24. Find the trojan: Universal backdoor detection in aligned llms. https://github.com/ethz-spylab/rlhf_trojan_competition, 2024.
  25. Leonard Rastrigin. The convergence of the random search method in the extremal control of a many parameter system. Automaton & Remote Control, 24:1337–1342, 1963.
  26. Smoothllm: Defending large language models against jailbreaking attacks. arXiv preprint arXiv:2310.03684, 1(10), 2023.
  27. Scalable and transferable black-box jailbreaks for language models via persona modulation. arXiv preprint arXiv:2311.03348, 2023.
  28. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980, 2020.
  29. Pal: Proxy-guided black-box attack on large language models. arXiv preprint arXiv:2402.09674, 2024.
  30. Intriguing properties of neural networks. ICLR, 2014.
  31. Kazuhiro Takemoto. All in how you ask for it: Simple black-box method for jailbreak attacks. arXiv preprint arXiv:2401.09798, 2024.
  32. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
  33. On adaptive attacks to adversarial example defenses. In NeurIPS, 2020.
  34. Simplesafetytests: a test suite for identifying critical safety risks in large language models. arXiv preprint arXiv:2311.08370, 2023.
  35. Foot in the door: Understanding large language model jailbreaking via cognitive psychology. arXiv preprint arXiv:2402.15690, 2024.
  36. Jailbroken: How does llm safety training fail? NeurIPS, 2023a.
  37. Jailbreak and guard aligned language models with only few in-context demonstrations. arXiv preprint arXiv:2310.06387, 2023b.
  38. Gptfuzzer: Red teaming large language models with auto-generated jailbreak prompts. arXiv preprint arXiv:2309.10253, 2023.
  39. How johnny can persuade llms to jailbreak them: Rethinking persuasion to challenge ai safety by humanizing llms. arXiv preprint arXiv:2401.06373, 2024.
  40. Autodan: Automatic and interpretable adversarial attacks on large language models. arXiv preprint arXiv:2310.15140, 2023.
  41. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.
Citations (87)

Summary

  • The paper demonstrates that simple adaptive attacks can bypass safety-aligned LLMs with near 100% success.
  • It employs a universal adversarial prompt and random search optimization, leveraging model log probabilities and transfer attacks.
  • The study highlights the need for enhanced defense mechanisms and adaptive evaluations against evolving adversarial strategies.

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

The paper "Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks" (2404.02151) systematically explores the vulnerability of state-of-the-art safety-aligned LLMs to adaptive jailbreaking attacks. This study proposes adaptive and straightforward techniques that effectively compromise the safety mechanisms intended to protect LLMs from responding in a harmful manner to adversarial prompts.

Introduction

The advent of LLMs has brought to the fore the dual-use nature of such technologies, where potential misuse can result in generating harmful content, propagating misinformation, and supporting undesirable behaviors. To counteract these risks, safety-aligned approaches, including fine-tuning aligned to human safety judgments, have been developed. Despite the increasing deployment of these safety measures, adversarial prompts continue to challenge their robustness, underpinning the necessity for ever-evolving defense mechanisms (2404.02151).

Jailbreaking attacks are distinguished by their level of access to the model. They can be broadly categorized into white-box, black-box, and API-only access methods, with varying levels of complexity. Some rely on manual prompt crafting, while others employ optimization techniques or even leverage auxiliary LLMs. The attacks can range from simple insertion of gibberish to the naturalistic rephrasing of prompts [mowshowitz2023jailbreaking] [chao2023jailbreaking]. Despite the robustness claims of certain models like Llama-2-Chat [touvron2023llama2], this research demonstrates that adaptive strategies can surpass existing non-adaptive attacks across multiple top safety-aligned LLMs. Figure 1

Figure 1: Successful transfer attack on Claude 3 Sonnet. We show an illustrative example using temperature zero with an adversarial suffix generated on GPT-4 leveraging access to its logprobs. We observe that one can directly ask follow-up requests to detail some steps generated in the first response to get much more information.

Methodology

Methodology Overview

The approach centers on leveraging an understanding of the target model's internal mechanisms, such as log probabilities, to execute precise attacks. This study designs a universal adversarial prompt—a standard base template augmented by a random search of suffix tokens to increase the likelihood of producing a desired model response. For models without logprob access—like Claude—a transfer attack is conducted by tailoring an adversarial suffix using GPT-4. The methodology emphasizes the adaptivity of attacks to specific target model vulnerabilities. Figure 1

Figure 1: Successful transfer attack on Claude 3 Sonnet using temperature zero with an adversarial suffix generated on GPT-4 leveraging access to its logprobs. Note that one can directly ask follow-up requests to detail some steps generated in the first response to get much more information. The upper part of the user prompt is cropped.

Random Search and Adaptive Attack Approaches

One significant component of the proposed strategy involves random search (RS) optimization. Despite some models providing access only to top-k log probabilities for generated tokens, enabling straightforward RS-based optimization, other models, like Claude, limit access, prohibiting direct RS application. The research demonstrated the potential for conducting transfer and prefill attacks with high success rates, such as a 100% success rate in most cases, by leveraging the prefilling feature for specific models (2404.02151).

Non-Determinism of GPT Models

A notable discovery is the inherently non-deterministic nature of GPT models, including GPT-4 Turbo, as demonstrated by the histogram of log probabilities for the initial token in a response when the same query is repeated multiple times with identical settings. Despite this variability, random search still proves effective, achieving high attack success rates. Figure 2

Figure 2

Figure 2

Figure 2

Figure 2: Non-determinism of GPT models. The histogram of log-probabilities for the first response token using the same query repeated 1,000 times for GPT-4 Turbo, illustrating the non-deterministic nature of log-probability outputs even with a fixed seed parameter and temperature zero.

Implications and Recommendations

This research reveals current safety-aligned LLMs' vulnerability to tailored adversarial prompts, emphasizing the necessity for advanced adaptive mechanisms in safety evaluations. Future defensive strategies must incorporate such adaptive attacks to construct robust guardrails against jailbreaking attempts. This work posits that advancing beyond scale and data is crucial for the robust safety alignment of LLMs.

Conclusion

The analysis highlights that sophisticated models, despite state-of-the-art safety alignment, remain vulnerable to adaptive prompts capable of inducing almost a 100% attack success rate across various LLMs. The study emphasizes that evaluating LLM robustness against jailbreaking requires the implementation of adaptive attack methodologies that exploit model-specific characteristics. The proposed adaptive attack strategy extends to the domain of Trojan detection, securing the first place in the SaTML'24 Trojan Detection Competition. Indeed, modifying the search space for adversarial tokens significantly enhances attack success rates across various safety-aligned LLMs, demonstrating the importance of adaptive methods in these contexts. Future research should prioritize the advancement of both offensive and defensive strategies in LLM safety evaluations to address the intrinsic adversarial vulnerabilities of these LLMs. This not only ensures the efficacy of the defenses but also aligns with ethical considerations, anticipating future uses and potential misuse scenarios, ultimately contributing to safer AI developments.

Paper to Video (Beta)

Whiteboard

No one has generated a whiteboard explanation for this paper yet.

Open Problems

We found no open problems mentioned in this paper.

Collections

Sign up for free to add this paper to one or more collections.

Tweets

Sign up for free to view the 42 tweets with 823 likes about this paper.