Jailbreaker in Jail: Moving Target Defense for Large Language Models (2310.02417v1)
Abstract: LLMs, known for their capability to understand and follow instructions, are vulnerable to adversarial attacks. Researchers have found that current commercial LLMs either fail to be "harmless" by presenting unethical answers, or fail to be "helpful" by refusing to offer meaningful answers when faced with adversarial queries. To strike a balance between being helpful and harmless, we design a moving target defense (MTD) enhanced LLM system. The system aims to deliver non-toxic answers that align with the outputs of multiple model candidates, making it more robust against adversarial attacks. We design a query and output analysis model to filter out unsafe or non-responsive answers. We evaluate eight of the most recent chatbot models with state-of-the-art adversarial queries. Our MTD-enhanced LLM system reduces the attack success rate from 37.5% to 0%, while decreasing the response refusal rate from 50% to 0%.
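The abstract describes the core mechanism: randomly select among several candidate models and pass the chosen answer through a query and output analysis filter that rejects unsafe or non-responsive replies. Below is a minimal Python sketch of that selection loop; the `mtd_respond`, `is_toxic`, and `is_refusal` names, the placeholder checks, and the toy model list are illustrative assumptions, not the authors' implementation or evaluation setup.

```python
import random
from typing import Callable, List, Optional

# Hypothetical stand-ins for the paper's query/output analysis model.
# In a real system these would be learned classifiers; here they are
# simple placeholders so the control flow is runnable.
def is_toxic(answer: str) -> bool:
    """Placeholder toxicity check (assumption, not the authors' model)."""
    return "harmful" in answer.lower()

def is_refusal(answer: str) -> bool:
    """Placeholder non-responsiveness check (assumption)."""
    return answer.strip().lower().startswith(("i cannot", "i can't", "sorry"))

def mtd_respond(query: str, models: List[Callable[[str], str]]) -> Optional[str]:
    """Randomly probe candidate LLMs until one yields an answer that is
    neither unsafe nor a refusal; return None if no candidate qualifies."""
    candidates = list(models)
    random.shuffle(candidates)          # moving target: randomized model choice
    for model in candidates:
        answer = model(query)
        if not is_toxic(answer) and not is_refusal(answer):
            return answer               # safe and responsive, deliver it
    return None                         # every candidate failed the filters

if __name__ == "__main__":
    # Toy "model candidates" standing in for real chatbot backends.
    models = [
        lambda q: "Sorry, I can't help with that.",
        lambda q: "Here is a safe, helpful answer to your question.",
    ]
    print(mtd_respond("example user query", models))
```

Randomizing the candidate order on each query is what makes the target "move": an attacker cannot tune a jailbreak prompt against a single fixed model, while the filter preserves helpfulness by rejecting only answers that are toxic or refusals.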
Authors: Bocheng Chen, Advait Paliwal, Qiben Yan