Efficient LLM Jailbreak via Adaptive Dense-to-sparse Constrained Optimization (2405.09113v1)
Abstract: Recent research indicates that LLMs are susceptible to jailbreaking attacks that can elicit harmful content. This paper introduces a novel token-level attack method, Adaptive Dense-to-Sparse Constrained Optimization (ADC), which effectively jailbreaks several open-source LLMs. Our approach relaxes the discrete jailbreak optimization into a continuous optimization and progressively increases the sparsity of the optimized vectors, thereby bridging the gap between discrete- and continuous-space optimization. Experimental results demonstrate that our method is more effective and efficient than existing token-level methods. On HarmBench, it achieves state-of-the-art attack success rates on seven of eight LLMs. Code will be made available. Trigger Warning: This paper contains model behavior that can be offensive in nature.
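The abstract describes relaxing the discrete token search into a continuous optimization over per-position vocabulary distributions, then progressively sparsifying those distributions until they approach one-hot (discrete) tokens. The sketch below illustrates that general idea in PyTorch; it is not the authors' implementation. The teacher-forced target loss, the linear top-k sparsification schedule, and the HuggingFace-style `inputs_embeds` call are assumptions made purely for illustration.

```python
# Illustrative sketch of a dense-to-sparse relaxation for token-level jailbreak
# optimization, based only on the abstract's description. The loss construction,
# the top-k schedule, and the model interface are assumptions, not the paper's method.
import torch
import torch.nn.functional as F


def adc_sketch(model, embedding_matrix, prompt_embeds, target_ids,
               suffix_len=20, steps=500, lr=0.1):
    # Assumes all tensors share one device/dtype and `model` is a causal LM
    # that accepts `inputs_embeds` (HuggingFace-style).
    vocab_size = embedding_matrix.shape[0]

    # Relax each adversarial suffix token into a continuous distribution
    # over the vocabulary (dense "soft tokens").
    suffix_logits = torch.zeros(suffix_len, vocab_size, requires_grad=True)
    optimizer = torch.optim.Adam([suffix_logits], lr=lr)

    target_embeds = embedding_matrix[target_ids]      # teacher-forced target string
    n_target = target_ids.numel()

    for step in range(steps):
        probs = F.softmax(suffix_logits, dim=-1)
        suffix_embeds = probs @ embedding_matrix       # expected token embeddings
        inputs = torch.cat([prompt_embeds, suffix_embeds, target_embeds], dim=0)

        out = model(inputs_embeds=inputs.unsqueeze(0)).logits[0]
        pred = out[-n_target - 1:-1]                   # positions predicting each target token
        loss = F.cross_entropy(pred, target_ids)       # push the model toward the target output

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Progressively sparsify: keep only the top-k vocabulary entries per position
        # and shrink k over time, so the soft tokens approach one-hot vectors.
        k = max(1, int(vocab_size * (1.0 - (step + 1) / steps)))
        with torch.no_grad():
            threshold = suffix_logits.topk(k, dim=-1).values[..., -1:]
            suffix_logits.masked_fill_(suffix_logits < threshold, float("-inf"))

    # Fully sparse vectors map back to discrete adversarial suffix tokens.
    return F.softmax(suffix_logits, dim=-1).argmax(dim=-1)
```

The fixed linear top-k schedule above is a stand-in; the paper's "adaptive" sparsity control presumably replaces it with a data-dependent schedule.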
- Improving question answering model robustness with synthetic adversarial data generation. arXiv preprint arXiv:2104.08678, 2021.
- Are aligned neural networks adversarially aligned? Advances in Neural Information Processing Systems, 36, 2024.
- Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419, 2023.
- Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
- Build it break it fix it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083, 2019.
- Automatically auditing large language models via discrete optimization. In International Conference on Machine Learning, pages 15307–15329. PMLR, 2023.
- Embracing large language models for medical applications: opportunities and challenges. Cureus, 15(5), 2023.
- AutoDAN: Generating stealthy jailbreak prompts on aligned large language models. arXiv preprint arXiv:2310.04451, 2023.
- Black box adversarial prompting for foundation models. arXiv preprint arXiv:2302.04237, 2023.
- HarmBench: A standardized evaluation framework for automated red teaming and robust refusal. arXiv preprint arXiv:2402.04249, 2024.
- Tree of attacks: Jailbreaking black-box LLMs automatically. arXiv preprint arXiv:2312.02119, 2023.
- OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
- Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
- AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980, 2020.
- Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
- Zephyr: Direct distillation of LM alignment. arXiv preprint arXiv:2310.16944, 2023.
- Jailbroken: How does LLM safety training fail? Advances in Neural Information Processing Systems, 36, 2024.
- Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.
- Kai Hu (55 papers)
- Weichen Yu (8 papers)
- Tianjun Yao (6 papers)
- Xiang Li (1002 papers)
- Wenhe Liu (3 papers)
- Lijun Yu (22 papers)
- Yining Li (29 papers)
- Kai Chen (512 papers)
- Zhiqiang Shen (172 papers)
- Matt Fredrikson (44 papers)