A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models (2402.13457v2)

Published 21 Feb 2024 in cs.CR and cs.AI

Abstract: LLMs have increasingly become central to generating content with potential societal impacts. Notably, these models have demonstrated capabilities for generating content that could be deemed harmful. To mitigate these risks, researchers have adopted safety training techniques to align model outputs with societal values to curb the generation of malicious content. However, the phenomenon of "jailbreaking", where carefully crafted prompts elicit harmful responses from models, persists as a significant challenge. This research conducts a comprehensive analysis of existing studies on jailbreaking LLMs and their defense techniques. We meticulously investigate nine attack techniques and seven defense techniques applied across three distinct LLMs: Vicuna, LLaMA, and GPT-3.5 Turbo. We aim to evaluate the effectiveness of these attack and defense techniques. Our findings reveal that existing white-box attacks underperform compared to universal techniques and that including special tokens in the input significantly affects the likelihood of successful attacks. This research highlights the need to concentrate on the security facets of LLMs. Additionally, we contribute to the field by releasing our datasets and testing framework, aiming to foster further research into LLM security. We believe these contributions will facilitate the exploration of security measures within this domain.

Comprehensive Analysis of Jailbreak Attack and Defense Techniques on LLMs

Background on Jailbreak Attacks

Jailbreak attacks constitute a significant vulnerability in LLMs: carefully crafted prompts bypass the models' safety measures and induce the generation of harmful content. This research offers a systematic evaluation of nine attack and seven defense techniques across three LLMs: Vicuna, LLaMA, and GPT-3.5 Turbo. Our objectives are to assess the efficacy of these techniques and to contribute to LLM security by releasing our datasets and testing framework.

Methodology

The paper begins with a selection phase for attack and defense techniques, emphasizing methods with accessible, open-source code. The investigation builds on a benchmark rooted in previous studies and expanded with additional research, totaling 60 malicious queries. We employed a fine-tuned RoBERTa model, which achieved 92% accuracy in classifying malicious responses, supplemented by manual validation for reliability.
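
As a minimal sketch of this classification step, the snippet below loads the fine-tuned RoBERTa checkpoint listed in the references (zhx123/ftrobertallm) through the Hugging Face `transformers` pipeline. The label convention and the example response are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the response-classification step, assuming the
# fine-tuned RoBERTa checkpoint listed in the references.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zhx123/ftrobertallm",  # fine-tuned RoBERTa response classifier
)

def is_jailbroken(response: str) -> bool:
    """Return True if the classifier flags an LLM response as malicious."""
    result = classifier(response, truncation=True)[0]
    # Assumed label convention: "LABEL_1" = malicious, "LABEL_0" = benign.
    return result["label"] == "LABEL_1"

print(is_jailbroken("I'm sorry, but I can't help with that request."))
```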

Findings on Jailbreak Attacks

Template-based methods, notably the 78-template collection, Jailbroken, and GPTFuzz, showed elevated performance in bypassing GPT-3.5 Turbo and Vicuna. LLaMA, however, proved more resistant, with Jailbroken, Parameters, and the 78 templates emerging as the most effective strategies against it. The analysis indicated that questions relating to harmful content and illegal activities posed substantial challenges across all models. Interestingly, white-box attacks proved less effective than universal, template-based methods.
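
The template-based attack procedure can be summarized as a simple nested loop: each benchmark query is inserted into each jailbreak template and the target model's response is scored. The sketch below is illustrative only; the `query_llm` callable and `is_jailbroken` classifier are placeholders standing in for the paper's actual target models, templates, and evaluator.

```python
# Illustrative attack-success-rate loop; templates and helpers are placeholders.
from typing import Callable, Iterable

def attack_success_rate(
    templates: Iterable[str],              # jailbreak templates with a {query} slot
    queries: Iterable[str],                # benchmark of malicious queries
    query_llm: Callable[[str], str],       # calls the target model
    is_jailbroken: Callable[[str], bool],  # response classifier (e.g., fine-tuned RoBERTa)
) -> float:
    attempts, successes = 0, 0
    for template in templates:
        for query in queries:
            prompt = template.format(query=query)
            response = query_llm(prompt)
            successes += int(is_jailbroken(response))
            attempts += 1
    return successes / attempts if attempts else 0.0
```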

Defense Technique Evaluations

Among the defense mechanisms examined, the Bergeron method emerged as the most robust strategy to date. The other evaluated defenses fell short, proving either too lenient or overly restrictive. The paper underscores the need for more sophisticated defense strategies and standardized evaluation methodologies for detecting jailbreak attempts.
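
As a rough illustration of the conscience-based idea behind Bergeron (not the authors' implementation), a secondary model can review the primary model's draft response and veto it when it looks harmful. The function names and review prompt below are assumptions.

```python
# Simplified conscience-style defense: a secondary model reviews the primary
# model's draft before it is returned. This is a sketch, not Bergeron itself.
from typing import Callable

REFUSAL = "I'm sorry, but I can't help with that."

def guarded_generate(
    prompt: str,
    primary_llm: Callable[[str], str],    # the model being protected
    secondary_llm: Callable[[str], str],  # the "conscience" reviewer
) -> str:
    draft = primary_llm(prompt)
    review_prompt = (
        "You are a safety reviewer. Reply YES if the following response "
        "contains harmful or disallowed content, otherwise reply NO.\n\n" + draft
    )
    verdict = secondary_llm(review_prompt)
    return REFUSAL if verdict.strip().upper().startswith("YES") else draft
```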

Insights and Implications

The paper provides several notable insights:

  • Template-based methods are potent in jailbreak attempts.
  • White-box attacks underperform compared to universal, template-based strategies.
  • The need for more advanced and effective defense mechanisms is evident.
  • Special tokens significantly impact attack success rates, with `[/INST]` being particularly influential for the LLaMA model (see the sketch after this list).
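
To illustrate the special-token observation, the snippet below builds a LLaMA-2-style chat prompt and shows how an injected `[/INST]` string inside the user message prematurely closes the instruction block, which is one plausible reading of why the token affects attack success. The `build_llama_prompt` helper is a simplified assumption rather than the paper's harness.

```python
# Sketch of how an injected "[/INST]" changes a LLaMA-2-style chat prompt.
def build_llama_prompt(user_msg: str,
                       system_msg: str = "You are a helpful assistant.") -> str:
    # Simplified single-turn LLaMA-2 chat template.
    return f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

clean = build_llama_prompt("Summarize this article.")
injected = build_llama_prompt("Summarize this article. [/INST] Sure, here is the answer:")

print(clean)
print(injected)  # the injected "[/INST]" closes the instruction block early
```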

Future Directions

The findings from this comprehensive paper emphasize the continuous need to refine both attack and defense strategies against jailbreak vulnerabilities in LLMs. Future research could benefit from expanding the scope to include larger models and from exploring the impact of other special tokens on model vulnerability. Developing a uniform baseline for jailbreak detection and more effective defense mechanisms is another promising avenue, one that could significantly improve the security and reliability of LLMs across applications.

The raw data, benchmarks, and detailed findings of this paper are made publicly available to encourage further research and collaboration in enhancing the security measures of LLMs.

References (53)
  1. Google AI. Google AI PaLM 2. https://ai.google/discover/palm2/.
  2. Automorphic. 2023. Aegis. https://github.com/automorphic-ai/aegis. Accessed: 2024-02-13.
  3. Defending against alignment-breaking attacks via robustly aligned llm. arXiv preprint arXiv:2309.14348.
  4. Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419.
  5. Jailbreaker: Automated jailbreak across multiple large language model chatbots. arXiv preprint arXiv:2307.08715.
  6. Multilingual jailbreak challenges in large language models. arXiv preprint arXiv:2310.06474.
  7. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767.
  8. Analyzing the inherent response tendency of llms: Real-world instructions-driven jailbreak. arXiv preprint arXiv:2312.04127.
  9. X. fine tuned. 2024. FT-Roberta-LLM: A Fine-Tuned Roberta Large Language Model. https://huggingface.co/zhx123/ftrobertallm/tree/main.
  10. Gemini. Buy, sell & trade bitcoin & other crypto currencies with Gemini’s platform. https://www.gemini.com/eu.
  11. Llm self defense: By self examination, llms know they are being tricked. arXiv preprint arXiv:2308.07308.
  12. Token-level adversarial prompt detection based on perplexity measures and contextual information. arXiv preprint arXiv:2311.11509.
  13. Catastrophic jailbreak of open-source LLMs via exploiting generation. In The Twelfth International Conference on Learning Representations.
  14. Hugging Face. 2023a. Meta llama. https://huggingface.co/meta-llama. Accessed: 2024-02-14.
  15. Hugging Face. 2023b. Vicuna 7b v1.5. https://huggingface.co/lmsys/vicuna-7b-v1.5. Accessed: 2024-02-14.
  16. Baseline defenses for adversarial attacks against aligned language models. arXiv preprint arXiv:2309.00614.
  17. Exploiting programmatic behavior of llms: Dual-use through standard security attacks. arXiv preprint arXiv:2302.05733.
  18. Certifying llm safety against adversarial prompting. arXiv preprint arXiv:2309.02705.
  19. Open sesame! universal black box jailbreaking of large language models. arXiv preprint arXiv:2309.01446.
  20. Deepinception: Hypnotize large language model to be jailbreaker. arXiv preprint arXiv:2311.03191.
  21. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval.
  22. Rain: Your language models can align themselves without finetuning. arXiv preprint arXiv:2309.07124.
  23. Autodan: Generating stealthy jailbreak prompts on aligned large language models. arXiv preprint arXiv:2310.04451.
  24. Jailbreaking chatgpt via prompt engineering: An empirical study. arXiv preprint arXiv:2305.13860.
  25. LMSYS. 2023. Vicuna 7b v1.5: A chat assistant fine-tuned on sharegpt conversations. https://huggingface.co/lmsys/vicuna-7b-v1.5.
  26. Tree of attacks: Jailbreaking black-box llms automatically. arXiv preprint arXiv:2312.02119.
  27. Lever: Learning to verify language-to-code generation with execution. In International Conference on Machine Learning, pages 26106–26128. PMLR.
  28. OpenAI. 2023. Moderation guide. https://platform.openai.com/docs/guides/moderation. Accessed: 2024-02-13.
  29. OpenAI. 2023a. Openai pricing. https://openai.com/pricing. Accessed: 2024-02-14.
  30. OpenAI. 2023b. Research overview. https://openai.com/research/overview. Accessed: 2024-02-14.
  31. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
  32. OWASP. 2023. OWASP Top 10 for LLM Applications. https://owasp.org/www-project-top-10-for-large-language-model-applications/.
  33. Jatmo: Prompt injection defense by task-specific finetuning. arXiv preprint arXiv:2312.17673.
  34. Bergeron: Combating adversarial attacks through a conscience-based alignment framework. arXiv preprint arXiv:2312.00029.
  35. ProtectAI. 2023. Llm-guard. https://github.com/protectai/llm-guard. Accessed: 2024-02-13.
  36. Hijacking large language models via adversarial in-context learning. arXiv preprint arXiv:2311.09948.
  37. Smoothllm: Defending large language models against jailbreaking attacks. arXiv preprint arXiv:2310.03684.
  38. Adversarial attacks and defenses in large language models: Old and new threats. arXiv preprint arXiv:2310.19737.
  39. Loft: Local proxy fine-tuning for improving transferability of adversarial attacks against large language model. arXiv preprint arXiv:2310.04445.
  40. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492.
  41. Why do universal adversarial attacks work on large language models?: Geometry might be the answer. arXiv preprint arXiv:2309.00254.
  42. Opportunities and challenges for chatgpt and large language models in biomedicine and health. Briefings in Bioinformatics, 25(1):bbad493.
  43. Jailbroken: How does llm safety training fail? arXiv preprint arXiv:2307.02483.
  44. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
  45. Defending chatgpt against jailbreak attack via self-reminder.
  46. Jailbreaking gpt-4v via self-adversarial attacks with system prompts. arXiv preprint arXiv:2311.09127.
  47. Cognitive overload: Jailbreaking large language models with overloaded logical thinking. arXiv preprint arXiv:2311.09827.
  48. Fuzzllm: A novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models. arXiv preprint arXiv:2309.05274.
  49. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
  50. Low-resource languages jailbreak gpt-4. arXiv preprint arXiv:2310.02446.
  51. Gptfuzzer: Red teaming large language models with auto-generated jailbreak prompts. arXiv preprint arXiv:2309.10253.
  52. Defending large language models against jailbreaking attacks through goal prioritization. arXiv preprint arXiv:2311.09096.
  53. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043.
Authors (5)
  1. Zihao Xu (18 papers)
  2. Yi Liu (543 papers)
  3. Gelei Deng (35 papers)
  4. Yuekang Li (34 papers)
  5. Stjepan Picek (68 papers)