Efficient and Stealthy Jailbreak Attacks via Adversarial Prompt Distillation from LLMs to SLMs (2506.17231v1)
Abstract: Jailbreak attacks on large language models (LLMs) raise serious security and ethical concerns. Existing jailbreak methods suffer from low efficiency, high computational cost, and poor cross-model adaptability and versatility, making it difficult to keep pace with rapidly evolving LLMs and new defense strategies. We propose Adversarial Prompt Distillation, a prompt generation and distillation method that combines masked language modeling, reinforcement learning, and dynamic temperature control, enabling small language models (SLMs) to mount jailbreak attacks against mainstream LLMs. Experimental results demonstrate the method's superiority in attack success rate and harmfulness, as well as its resource efficiency and cross-model adaptability. This work establishes the feasibility of distilling the jailbreak capability of LLMs into SLMs, exposes model vulnerabilities, and offers a new direction for LLM security research.
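The abstract names dynamic temperature control as one of the method's three components but does not specify its form. As a rough, non-authoritative sketch of what temperature-scheduled sampling during distillation could look like, here is a minimal Python illustration; the cosine decay schedule, the `t_max`/`t_min` bounds, and all function names are assumptions for illustration, not the paper's actual formulation:

```python
import math
import random

def dynamic_temperature(step: int, total_steps: int,
                        t_max: float = 1.5, t_min: float = 0.7) -> float:
    """Hypothetical cosine schedule: sample hot early (exploration),
    cool toward t_min late (exploitation). Not the paper's control law."""
    frac = step / max(total_steps - 1, 1)
    return t_min + 0.5 * (t_max - t_min) * (1.0 + math.cos(math.pi * frac))

def sample_token(logits: list[float], temperature: float) -> int:
    """Temperature-scaled softmax sampling over a toy vocabulary."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy usage: the temperature decays across distillation steps,
# so later steps sample more greedily from the same logits.
logits = [2.0, 1.0, 0.5, -1.0]
for step in range(5):
    t = dynamic_temperature(step, total_steps=5)
    tok = sample_token(logits, t)
    print(f"step={step} temperature={t:.2f} sampled_token={tok}")
```

A schedule of this shape is a common heuristic for balancing diverse prompt generation early in training against stable convergence later; how the paper actually modulates temperature would need to be checked against the full text.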