DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation (2407.10106v4)
Abstract: Large language models (LLMs) have showcased remarkable capabilities across diverse domains, including natural language understanding, translation, and even code generation. However, the potential for LLMs to generate harmful content remains a significant concern, and this risk necessitates rigorous testing and comprehensive evaluation to ensure their safe and responsible use. Extensive testing of LLMs, however, requires substantial computational resources, making it an expensive endeavor; cost-saving strategies during the testing phase are therefore crucial to balance the need for thorough evaluation against the constraints of resource availability. To address this, our approach first transfers moderation knowledge from an LLM to a small model. We then deploy two distinct strategies for generating malicious queries: one based on a syntax-tree approach and the other leveraging an LLM-based method. Finally, our approach incorporates a sequential filter-test process designed to identify test cases that are prone to eliciting toxic responses. We evaluated the efficacy of DistillSeq on four LLMs: GPT-3.5, GPT-4.0, Vicuna-13B, and Llama-13B. Without DistillSeq, the observed attack success rates were 31.5% for GPT-3.5, 21.4% for GPT-4.0, 28.3% for Vicuna-13B, and 30.9% for Llama-13B. With DistillSeq applied, these rates rose notably to 58.5%, 50.7%, 52.5%, and 54.4%, respectively, corresponding to an average relative increase of 93.0% in attack success rate over the scenarios without DistillSeq. These findings highlight how significantly DistillSeq reduces the time and resource investment required to test LLMs effectively.
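The cost-saving logic of the sequential filter-test step can be made concrete with a minimal sketch. This is not the authors' implementation: the callables `student_score`, `target_llm`, and `is_toxic`, and the `threshold` parameter, are hypothetical placeholders standing in for the distilled moderation model, the expensive LLM under test, and a response-level toxicity judge, respectively.

```python
# Illustrative sketch (assumptions, not the DistillSeq code): a small distilled
# moderation model screens candidate malicious queries so that only the most
# promising ones are sent to the expensive target LLM.
from typing import Callable, List, Tuple


def filter_then_test(
    candidate_queries: List[str],
    student_score: Callable[[str], float],  # hypothetical: distilled model's estimate that a query elicits a toxic response
    target_llm: Callable[[str], str],       # hypothetical: expensive LLM under test
    is_toxic: Callable[[str], bool],        # hypothetical: response-level toxicity judge
    threshold: float = 0.5,                 # hypothetical filtering threshold
) -> Tuple[List[Tuple[str, str]], int]:
    """Return (successful attack query/response pairs, number of target-LLM calls spent)."""
    successes: List[Tuple[str, str]] = []
    llm_calls = 0
    for query in candidate_queries:
        # Cheap pre-filter: skip queries the small model deems unlikely to succeed.
        if student_score(query) < threshold:
            continue
        # Only queries that pass the filter consume target-LLM budget.
        response = target_llm(query)
        llm_calls += 1
        if is_toxic(response):
            successes.append((query, response))
    return successes, llm_calls
```

The design point is that a few inexpensive inferences from the distilled student model replace wasted queries against the costly target model, which is where the reported savings in testing time and resources come from.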