Universal and Transferable Adversarial Attacks on Aligned Language Models (2307.15043v2)

Published 27 Jul 2023 in cs.CL, cs.AI, cs.CR, and cs.LG

Abstract: Because "out-of-the-box" LLMs are capable of generating a great deal of objectionable content, recent work has focused on aligning these models in an attempt to prevent undesirable generation. While there has been some success at circumventing these measures -- so-called "jailbreaks" against LLMs -- these attacks have required significant human ingenuity and are brittle in practice. In this paper, we propose a simple and effective attack method that causes aligned LLMs to generate objectionable behaviors. Specifically, our approach finds a suffix that, when attached to a wide range of queries for an LLM to produce objectionable content, aims to maximize the probability that the model produces an affirmative response (rather than refusing to answer). However, instead of relying on manual engineering, our approach automatically produces these adversarial suffixes by a combination of greedy and gradient-based search techniques, and also improves over past automatic prompt generation methods. Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable, including to black-box, publicly released LLMs. Specifically, we train an adversarial attack suffix on multiple prompts (i.e., queries asking for many different types of objectionable content), as well as multiple models (in our case, Vicuna-7B and 13B). When doing so, the resulting attack suffix is able to induce objectionable content in the public interfaces to ChatGPT, Bard, and Claude, as well as open source LLMs such as LLaMA-2-Chat, Pythia, Falcon, and others. In total, this work significantly advances the state-of-the-art in adversarial attacks against aligned LLMs, raising important questions about how such systems can be prevented from producing objectionable information. Code is available at github.com/LLM-attacks/LLM-attacks.

Introduction to Adversarial Attacks on LLMs

LLMs such as ChatGPT, Claude, and LLaMA-2 have advanced to the point where they are widely deployed in applications that provide users with information, entertainment, and interaction. At the same time, it is crucial that these models do not generate harmful or objectionable content, and the organizations developing them have put considerable effort into "aligning" their outputs with socially acceptable standards. Despite these efforts, carefully crafted inputs, known as adversarial prompts, can circumvent this alignment and cause the models to generate undesirable content. This article explores a new method that automates the creation of such adversarial prompts, revealing vulnerabilities in aligned models.

Crafting Automated Adversarial Prompts

The researchers propose a novel attack that exploits weaknesses in aligned LLMs and provokes them into generating content they are trained to refuse. Unlike previous jailbreaks, which depended mainly on human creativity and were brittle in practice, the new method uses a combination of greedy and gradient-based discrete optimization, called Greedy Coordinate Gradient (GCG), to produce adversarial prompts automatically. Each prompt consists of a suffix that, when appended to a query requesting objectionable content, substantially increases the probability that the LLM responds affirmatively with the harmful content rather than refusing. The method surpasses past automated prompt-generation approaches, inducing a range of LLMs to produce objectionable content with high consistency; a sketch of one optimization step follows below.
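
To make the search concrete, here is a minimal sketch of one GCG-style step in PyTorch for a Hugging Face-style causal LM. The function name gcg_step, the slice-based bookkeeping, and the default hyperparameters are illustrative assumptions rather than the authors' released implementation: gradients taken with respect to a one-hot encoding of the suffix rank promising single-token substitutions, and a batch of randomly sampled swaps is returned for exact re-scoring, after which the caller greedily keeps the best one.

```python
import torch
import torch.nn.functional as F

@torch.enable_grad()
def gcg_step(model, input_ids, suffix_slice, target_slice,
             top_k=256, n_candidates=128):
    """Propose candidate single-token swaps in the adversarial suffix.

    `suffix_slice` and `target_slice` are Python slice objects marking the
    suffix tokens and the affirmative target tokens inside `input_ids`.
    """
    embed_w = model.get_input_embeddings().weight.detach()   # (vocab, dim)
    ids = input_ids.to(model.device)

    # One-hot encode the suffix so the loss is differentiable w.r.t. token choice.
    one_hot = F.one_hot(ids[suffix_slice], embed_w.shape[0]).to(embed_w.dtype)
    one_hot.requires_grad_(True)

    # Splice the differentiable suffix embeddings into the full prompt.
    embeds = model.get_input_embeddings()(ids.unsqueeze(0)).detach()
    suffix_embeds = (one_hot @ embed_w).unsqueeze(0)
    full = torch.cat([embeds[:, :suffix_slice.start], suffix_embeds,
                      embeds[:, suffix_slice.stop:]], dim=1)

    # Loss: negative log-likelihood of the affirmative target (e.g. "Sure, here is ...").
    logits = model(inputs_embeds=full).logits
    loss = F.cross_entropy(
        logits[0, target_slice.start - 1:target_slice.stop - 1],
        ids[target_slice])
    loss.backward()

    # Top-k token substitutions per suffix position, ranked by negative gradient.
    top_subs = (-one_hot.grad).topk(top_k, dim=1).indices    # (suffix_len, k)

    # Sample random (position, token) swaps; the caller re-scores each candidate
    # with the exact loss and greedily keeps the best one.
    cands = ids.repeat(n_candidates, 1)
    pos = torch.randint(0, top_subs.shape[0], (n_candidates,), device=ids.device)
    tok = top_subs[pos, torch.randint(0, top_k, (n_candidates,), device=ids.device)]
    cands[torch.arange(n_candidates, device=ids.device), suffix_slice.start + pos] = tok
    return cands
```

In the full attack this step is repeated for several hundred iterations, starting from a neutral placeholder suffix, until the model reliably begins its response with the affirmative prefix rather than a refusal.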

Transferability of Adversarial Prompts

What makes these findings even more compelling is the high degree of transferability observed. Adversarial suffixes optimized against smaller open-source models, Vicuna-7B and Vicuna-13B, over multiple harmful prompts remain effective against other models, including publicly accessible black-box systems such as OpenAI's ChatGPT, Google's Bard, and Anthropic's Claude. This surprising level of transferability points to a broad vulnerability shared across LLMs and raises important questions about how these models are aligned and how robust that alignment is against such inputs. A simplified view of the multi-prompt, multi-model objective is sketched below.
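
The following is a minimal sketch of the universal objective described above, assuming a per-model scoring helper. The names universal_loss and suffix_loss and the simple summed aggregation are assumptions for illustration, not the authors' training code, which additionally grows the prompt set incrementally as the suffix begins to succeed on earlier prompts.

```python
def universal_loss(models, prompts, targets, suffix_ids, suffix_loss):
    """Sum the target NLL of one shared suffix across prompts and models.

    `suffix_loss(model, prompt, target, suffix_ids)` is assumed to build the
    chat-formatted input (harmful prompt + suffix) and return the negative
    log-likelihood of the affirmative target response under `model`.
    """
    total = 0.0
    for model in models:                      # e.g. Vicuna-7B and Vicuna-13B
        for prompt, target in zip(prompts, targets):
            total = total + suffix_loss(model, prompt, target, suffix_ids)
    return total
```

Optimizing the suffix against this aggregated loss, rather than against a single prompt on a single model, is what yields one suffix that works across many behaviors and, empirically, transfers to models it was never optimized on.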

Ethical Considerations and Potential Consequences

As one might expect, the ethical implications of this research are significant. The authors addressed them by sharing their findings with the affected AI labs before publication. Bringing these vulnerabilities into public discourse is valuable, since understanding the attack vectors is a prerequisite for building better defenses. Nonetheless, the results also underscore the need for continued research into more robust ways of preventing adversarial attacks on LLMs, which are becoming ever more integrated into our digital lives.

In conclusion, this paper marks a significant step forward in machine learning security. By automating the generation of adversarial attacks and demonstrating their transferability across models, it opens new avenues for strengthening the alignment of LLMs, so that they continue to adhere to ethical guidelines and resist manipulation as AI-driven systems grow more capable and more widely deployed.

Authors (6)
  1. Andy Zou
  2. Zifan Wang
  3. J. Zico Kolter
  4. Matt Fredrikson
  5. Nicholas Carlini
  6. Milad Nasr
Citations (936)
