
Efficient LLM Jailbreak via Adaptive Dense-to-sparse Constrained Optimization (2405.09113v1)

Published 15 May 2024 in cs.LG

Abstract: Recent research indicates that LLMs are susceptible to jailbreaking attacks that can generate harmful content. This paper introduces a novel token-level attack method, Adaptive Dense-to-Sparse Constrained Optimization (ADC), which effectively jailbreaks several open-source LLMs. Our approach relaxes the discrete jailbreak optimization into a continuous optimization and progressively increases the sparsity of the optimized vectors, thereby bridging the gap between discrete- and continuous-space optimization. Experimental results demonstrate that our method is more effective and efficient than existing token-level methods: on HarmBench, it achieves state-of-the-art attack success rates on seven of eight LLMs. Code will be made available. Trigger warning: this paper contains model behavior that can be offensive in nature.
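The dense-to-sparse relaxation described in the abstract can be illustrated with a toy sketch. The paper's actual method operates on token embeddings of an LLM; the version below is only a minimal NumPy analogue for a single token slot, with a hypothetical `grad_fn` standing in for the gradient of the (relaxed) jailbreak loss. It shows the core idea: optimize a continuous distribution over the vocabulary, while a schedule shrinks the allowed support until the vector becomes a one-hot, i.e. a discrete token.

```python
import numpy as np

def project_topk_simplex(v, k):
    """Keep the k largest entries of v, zero the rest, renormalize onto the simplex."""
    out = np.zeros_like(v)
    idx = np.argsort(v)[-k:]
    out[idx] = np.clip(v[idx], 0.0, None)
    s = out.sum()
    return out / s if s > 0 else np.full_like(v, 1.0 / len(v))

def dense_to_sparse_optimize(grad_fn, vocab_size, steps=300, lr=0.5):
    """Toy dense-to-sparse optimization for one token slot (illustrative only)."""
    # Start fully dense: uniform distribution over the vocabulary.
    x = np.full(vocab_size, 1.0 / vocab_size)
    for t in range(steps):
        x = x - lr * grad_fn(x)  # gradient step on the relaxed, continuous loss
        # Sparsity schedule: allowed support shrinks from vocab_size down to 1.
        k = max(1, int(round(vocab_size * (1 - t / steps))))
        x = project_topk_simplex(x, k)
    return int(np.argmax(x))  # final near-one-hot vector selects a discrete token
```

With a quadratic toy loss whose minimum sits at one vocabulary index, the procedure recovers that index; in the actual attack, `grad_fn` would come from backpropagating the target-string loss through the victim model.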

References (18)
  1. Improving question answering model robustness with synthetic adversarial data generation. arXiv preprint arXiv:2104.08678, 2021.
  2. Are aligned neural networks adversarially aligned? Advances in Neural Information Processing Systems, 36, 2024.
  3. Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419, 2023.
  4. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2(3):6, 2023.
  5. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083, 2019.
  6. Automatically auditing large language models via discrete optimization. In International Conference on Machine Learning, pages 15307–15329. PMLR, 2023.
  7. Embracing large language models for medical applications: opportunities and challenges. Cureus, 15(5), 2023.
  8. Autodan: Generating stealthy jailbreak prompts on aligned large language models. arXiv preprint arXiv:2310.04451, 2023.
  9. Black box adversarial prompting for foundation models. arXiv preprint arXiv:2302.04237, 2023.
  10. Harmbench: A standardized evaluation framework for automated red teaming and robust refusal. arXiv preprint arXiv:2402.04249, 2024.
  11. Tree of attacks: Jailbreaking black-box llms automatically. arXiv preprint arXiv:2312.02119, 2023.
  12. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
  13. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
  14. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980, 2020.
  15. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
  16. Zephyr: Direct distillation of lm alignment. arXiv preprint arXiv:2310.16944, 2023.
  17. Jailbroken: How does llm safety training fail? Advances in Neural Information Processing Systems, 36, 2024.
  18. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.
Authors (10)
  1. Kai Hu
  2. Weichen Yu
  3. Tianjun Yao
  4. Xiang Li
  5. Wenhe Liu
  6. Lijun Yu
  7. Yining Li
  8. Kai Chen
  9. Zhiqiang Shen
  10. Matt Fredrikson
Citations (3)