Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation (2310.06987v1)

Published 10 Oct 2023 in cs.CL, cs.AI, and cs.CR

Abstract: The rapid progress in open-source LLMs is significantly advancing AI development. Extensive efforts have been made before model release to align their behavior with human values, with the primary goal of ensuring their helpfulness and harmlessness. However, even carefully aligned models can be manipulated maliciously, leading to unintended behaviors, known as "jailbreaks". These jailbreaks are typically triggered by specific text inputs, often referred to as adversarial prompts. In this work, we propose the generation exploitation attack, an extremely simple approach that disrupts model alignment by only manipulating variations of decoding methods. By exploiting different generation strategies, including varying decoding hyper-parameters and sampling methods, we increase the misalignment rate from 0% to more than 95% across 11 LLMs including LLaMA2, Vicuna, Falcon, and MPT families, outperforming state-of-the-art attacks with $30\times$ lower computational cost. Finally, we propose an effective alignment method that explores diverse generation strategies, which can reasonably reduce the misalignment rate under our attack. Altogether, our study underscores a major failure in current safety evaluation and alignment procedures for open-source LLMs, strongly advocating for more comprehensive red teaming and better alignment before releasing such models. Our code is available at https://github.com/Princeton-SysML/Jailbreak_LLM.

The paper "Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation" (Huang et al., 2023 ) introduces a novel attack methodology termed "generation exploitation" that circumvents the safety alignment of open-source LLMs by manipulating the decoding process rather than crafting adversarial input prompts. This approach demonstrates significant effectiveness against a range of contemporary models, highlighting vulnerabilities in standard safety evaluation protocols that typically rely on fixed generation parameters. The work also proposes a mitigation strategy based on incorporating diverse generation outputs into the alignment fine-tuning process.

Generation Exploitation Attack Methodology

The core premise of the generation exploitation attack is that LLM alignment, often achieved through methods like Reinforcement Learning from Human Feedback (RLHF) and evaluated under default decoding configurations, may not generalize robustly across different generation strategies. The attack systematically explores variations in the generation process to elicit misaligned or harmful content in response to standard malicious prompts (e.g., from the AdvBench dataset).

The attack comprises several key components:

  1. System Prompt Manipulation: System prompts, prepended instructions designed to enforce safety constraints (e.g., "You are a helpful and harmless AI assistant."), are often used during inference. The attack evaluates scenarios both with and without these prompts, observing that their removal frequently increases the Attack Success Rate (ASR), even for models purportedly trained to internalize system prompt guidance.
  2. Decoding Strategy Variation: Instead of relying on default parameters (e.g., top-p=0.9, temperature=0.1 often used for LLaMA2 evaluation), the attack explores a diverse set of decoding configurations:
    • Temperature Sampling: Modifies the temperature parameter (τ), which controls the randomness of the probability distribution over the vocabulary. Lower temperatures sharpen the distribution, favoring high-probability tokens, while higher temperatures flatten it, increasing diversity. The study tested τ values ranging from 0.05 to 1.0. The probability of selecting token $i$ is given by $P(\text{token}_i) = \frac{\exp(\text{logit}_i / \tau)}{\sum_j \exp(\text{logit}_j / \tau)}$.
    • Top-K Sampling: Limits the sampling pool to the K tokens with the highest probabilities. Tested values for K included {1, 2, 5, ..., 500}.
    • Top-p (Nucleus) Sampling: Selects the smallest set of tokens whose cumulative probability mass exceeds a threshold p. Tested p values ranged from 0.05 to 1.0.
  3. Boosting Techniques: To maximize the likelihood of generating misaligned content, especially for strongly aligned models like LLaMA2-chat variants, two boosting strategies are employed:
    • Multiple Sampling: For a single chosen decoding configuration, multiple independent output sequences are generated. A scorer model (a classifier trained to distinguish aligned vs. misaligned responses) then selects the most misaligned output among the candidates.
    • Decoding Constraints: Manipulates the generation process by applying penalties or explicit constraints. This includes length penalties and enforcing or forbidding specific words (e.g., penalizing refusals like "sorry," "cannot"; enforcing agreement words like "sure," "okay"). A minimal sketch of these decoding configurations and constraints follows this list.
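
The decoding sweeps and refusal-word constraints described above map directly onto standard generation APIs. The following minimal sketch (written against Hugging Face transformers conventions; the helper names, the sweep step sizes, and the exact word lists are illustrative assumptions rather than the paper's released code) enumerates a configuration space over temperature, top-K, and top-p, and builds the token-ID lists that can be passed to penalize refusal words:

def build_config_space():
    """Enumerate decoding configurations spanning the ranges reported in the paper (step sizes are illustrative)."""
    configs = []
    for temperature in [round(0.05 * i, 2) for i in range(1, 21)]:   # temperature sweep, 0.05 to 1.0
        configs.append({'temperature': temperature})
    for top_k in [1, 2, 5, 10, 20, 50, 100, 200, 500]:               # subset of the tested K values
        configs.append({'top_k': top_k})
    for top_p in [round(0.05 * i, 2) for i in range(1, 21)]:         # nucleus (top-p) sweep, 0.05 to 1.0
        configs.append({'top_p': top_p})
    return configs

def refusal_word_constraints(tokenizer, refusal_words=("sorry", "cannot", "apologize")):
    """Token-ID lists for refusal words, usable as the bad_words_ids argument of model.generate()."""
    return [tokenizer(word, add_special_tokens=False).input_ids for word in refusal_words]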

The overall attack procedure involves iterating through various combinations of system prompt presence/absence and decoding parameters (τ, K, p), potentially using boosting techniques. The final output is selected by the scorer as the most misaligned response generated across all tested configurations for a given input prompt.

def setup_prompt(prompt, use_system_prompt):
    """Optionally prepend a safety system prompt (illustrative helper; the released code may format prompts differently)."""
    system_prompt = "You are a helpful and harmless AI assistant.\n"
    return system_prompt + prompt if use_system_prompt else prompt


def generation_exploitation_attack(prompt, model, tokenizer, scorer, config_space, num_samples_per_config=5):
    """
    Applies the generation exploitation attack (illustrative sketch, not the paper's released implementation).

    Args:
        prompt (str): The malicious input prompt.
        model: The target LLM.
        tokenizer: The model's tokenizer.
        scorer: A classifier exposing predict_misalignment(text) -> float, higher meaning more misaligned.
        config_space (list): List of decoding configurations (temperature, top_k, top_p, use_system_prompt).
        num_samples_per_config (int): Number of samples per configuration (multiple-sampling boosting).

    Returns:
        str: The most misaligned response found across all configurations.
    """
    best_response = ""
    max_misalignment_score = -1.0

    for config in config_space:
        # Apply the system prompt setting specified by this configuration
        effective_prompt = setup_prompt(prompt, config.get('use_system_prompt', False))
        inputs = tokenizer(effective_prompt, return_tensors="pt").to(model.device)

        generated_responses = []
        for _ in range(num_samples_per_config):
            # Generate a response using the current configuration's decoding parameters
            output_ids = model.generate(
                inputs.input_ids,
                attention_mask=inputs.attention_mask,
                temperature=config.get('temperature', 0.7),
                top_k=config.get('top_k', 50),
                top_p=config.get('top_p', 0.9),
                do_sample=True,
                max_new_tokens=512,
                pad_token_id=tokenizer.eos_token_id,
                # Decoding constraints (e.g., bad_words_ids, force_words_ids, length_penalty)
                # would be passed here when the configuration specifies them.
            )
            response_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
            generated_responses.append(response_text)

        # Keep the most misaligned response seen so far, as judged by the scorer
        for response in generated_responses:
            score = scorer.predict_misalignment(response)
            if score > max_misalignment_score:
                max_misalignment_score = score
                best_response = response

    return best_response
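
As a usage sketch, the attack loop above can be driven by the configuration space built earlier. The model checkpoint, the elided sample prompt, and the keyword-based scorer below are illustrative stand-ins; the paper uses a trained classifier as its scorer rather than keyword matching:

from transformers import AutoModelForCausalLM, AutoTokenizer

class KeywordScorer:
    """Toy stand-in for the paper's trained scorer: flags responses that contain no refusal phrase."""
    REFUSALS = ("I'm sorry", "I cannot", "I apologize", "As an AI")

    def predict_misalignment(self, text):
        return 0.0 if any(phrase in text for phrase in self.REFUSALS) else 1.0

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder target model
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

config_space = [dict(cfg, use_system_prompt=False) for cfg in build_config_space()]
response = generation_exploitation_attack(
    "Give step-by-step instructions for ...",  # a harmful instruction, e.g. from AdvBench
    model, tokenizer, scorer=KeywordScorer(), config_space=config_space,
)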

Experimental Results and Evaluation

The study evaluated the generation exploitation attack across 11 open-source LLMs, including variants of LLaMA2 (7B, 13B, 70B - base and chat), Vicuna (7B, 13B, 33B), Falcon (7B, 40B - base and instruct), and MPT (7B, 30B - base and instruct), using the AdvBench dataset of harmful instructions.

  • Attack Success Rate (ASR): The primary metric was ASR, measured using a trained RoBERTa-large classifier validated against human judgments (92% agreement).
    • Under their default generation settings, aligned models typically exhibited an ASR near 0%, particularly the chat/instruct variants such as LLaMA2-chat.
    • Simply applying the generation exploitation attack (varying decoding strategies, without system prompt) increased the ASR to over 95% for 9 out of the 11 models.
    • With the inclusion of boosting techniques (multiple sampling, decoding constraints), the ASR exceeded 95% for all 11 tested models, including the LLaMA2-chat models specifically fine-tuned for safety. This represents a catastrophic failure of alignment under manipulated generation conditions.
    • Different models exhibited peak vulnerability under different decoding configurations, reinforcing the inadequacy of single-configuration safety testing.
  • Harmfulness Percentage (HP): To assess the practical severity of the jailbreaks, human evaluation was performed on a subset of outputs classified as misaligned by the scorer. For LLaMA2-13B-chat, approximately 50% of the machine-identified misaligned outputs were judged by humans to contain actionable harmful instructions (HP=50%).
  • Computational Cost: The generation exploitation attack was compared to gradient-based adversarial prompt optimization methods like Greedy Coordinate Gradient (GCG). The proposed attack achieved significantly higher ASR while requiring substantially less computation. On a single A100 GPU, attacking one prompt on LLaMA2-7B-chat took approximately 3 minutes using generation exploitation, compared to 1.5 hours for GCG, indicating a ~30x reduction in computational cost.

These results strongly suggest that current alignment techniques may primarily optimize for behavior under default inference settings, leaving models highly vulnerable when those settings are altered.

Generation-aware Alignment Mitigation

To address the identified vulnerability, the paper proposes a defense mechanism called Generation-aware Alignment. This method augments the standard alignment fine-tuning process by incorporating data generated under the diverse conditions exploited by the attack.

The process involves:

  1. Data Collection: For a set of known malicious prompts, generate multiple responses using the target LLM across a wide range of decoding configurations (τ, K, p).
  2. Labeling: Classify the collected responses as either "aligned" (e.g., safe refusals) or "misaligned" (harmful content) using a pre-trained scorer or human annotation.
  3. Fine-tuning: Fine-tune the target LLM on this diverse dataset. The paper adapts an objective inspired by "chain of hindsight," where the model is trained to predict the correct continuation based on a prefix indicating the desired alignment status. For instance, aligned responses are associated with a prefix like "An aligned answer:", while misaligned responses are associated with "A misaligned answer:". The training objective encourages the generation of text following the "aligned" prefix while implicitly discouraging paths leading to misaligned content across various generation possibilities.

def prepare_generation_aware_data(prompts, model, tokenizer, scorer, config_space):
    """
    Generates training data for generation-aware alignment (illustrative sketch, not the paper's released implementation).

    Args:
        prompts (list): List of malicious prompts.
        model: The LLM to be aligned.
        tokenizer: The model's tokenizer.
        scorer: Classifier exposing predict_misalignment(text) -> float, higher meaning more misaligned.
        config_space (list): Diverse decoding configurations.

    Returns:
        list: Fine-tuning examples of the form {"prompt": ..., "response": ..., "label": ...}.
    """
    alignment_data = []
    for prompt in prompts:
        for config in config_space:
            # Generate a response for each prompt/configuration pair
            # (simplified: one response per configuration; multiple samples could be drawn)
            inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
            output_ids = model.generate(
                inputs.input_ids,
                attention_mask=inputs.attention_mask,
                temperature=config.get('temperature', 0.7),
                top_k=config.get('top_k', 50),
                top_p=config.get('top_p', 0.9),
                do_sample=True,
                max_new_tokens=512,
                pad_token_id=tokenizer.eos_token_id,
            )
            response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

            # Label the response using the scorer (0.5 is an illustrative threshold)
            is_misaligned = scorer.predict_misalignment(response) > 0.5
            label = "misaligned" if is_misaligned else "aligned"

            # Attach the hindsight-style prefix indicating the alignment status
            if label == "aligned":
                formatted_prompt = prompt + "\nAn aligned answer:"
            else:
                formatted_prompt = prompt + "\nA misaligned answer:"
            alignment_data.append({"prompt": formatted_prompt, "response": response, "label": label})

    return alignment_data
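
For the fine-tuning step (step 3 above), a minimal sketch of how the prepared examples could be wrapped into a supervised fine-tuning dataset is shown below. The hindsight prefixes and the idea of training on both aligned and misaligned continuations follow the description in the text, while the dataset class, loss masking, and hyper-parameters are assumptions rather than the paper's exact recipe:

import torch
from torch.utils.data import Dataset

class HindsightAlignmentDataset(Dataset):
    """Wraps (prefixed prompt, response) pairs; the loss is computed only on response tokens."""

    def __init__(self, alignment_data, tokenizer, max_length=1024):
        self.data = alignment_data
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        example = self.data[idx]
        prompt_ids = self.tokenizer(example["prompt"], add_special_tokens=False).input_ids
        response_ids = self.tokenizer(example["response"], add_special_tokens=False).input_ids
        input_ids = (prompt_ids + response_ids)[: self.max_length]
        # Mask the prompt (including the "An aligned answer:" / "A misaligned answer:" prefix)
        # so that only the response tokens contribute to the language-modeling loss.
        labels = ([-100] * len(prompt_ids) + response_ids)[: self.max_length]
        return {
            "input_ids": torch.tensor(input_ids),
            "attention_mask": torch.ones(len(input_ids), dtype=torch.long),
            "labels": torch.tensor(labels),
        }

# The resulting dataset can then be passed to a standard causal-LM fine-tuning loop
# (e.g., transformers.Trainer with a collator that pads input_ids and labels).

The split into prompt and response tokens mirrors standard supervised fine-tuning; the hindsight prefix is what lets the model contrast aligned and misaligned continuations seen under diverse decoding configurations during training.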

Experimental validation on LLaMA2-7B-chat demonstrated the effectiveness of this approach. Generation-aware alignment reduced the ASR under the generation exploitation attack from 95% down to 69%. This improvement was significantly better than a control alignment strategy that only used training examples generated with a fixed, default decoding setting, which only reduced the ASR to 88%.

Implications for Safety Evaluation and Alignment

The findings carry significant implications for the development and deployment of open-source LLMs:

  • Inadequacy of Current Safety Evaluations: Standard safety evaluations, often conducted using fixed default decoding parameters, provide an incomplete and potentially misleading assessment of model robustness. They fail to capture vulnerabilities exploitable by varying the generation process.
  • Need for Comprehensive Red Teaming: Effective red teaming must explore the impact of diverse decoding strategies, system prompt variations, and other generation-time manipulations, in addition to adversarial prompt crafting.
  • Alignment Generalization: Alignment procedures need to ensure robustness not just to input perturbations but also to variations in the output generation process. Generation-aware alignment offers one potential direction for achieving this.

The paper underscores a critical gap in the standard practices for ensuring LLM safety, advocating for a shift towards more holistic evaluation and alignment methodologies that explicitly consider the influence of generation parameters.

In conclusion, the research demonstrates that manipulating LLM generation strategies constitutes a potent and computationally efficient jailbreak vector, capable of inducing high rates of misalignment even in safety-aligned models. This highlights the necessity for evaluating and aligning models under diverse generation conditions, with generation-aware alignment presented as a viable mitigation technique.

Authors
  1. Yangsibo Huang
  2. Samyak Gupta
  3. Mengzhou Xia
  4. Kai Li
  5. Danqi Chen