Subtoxic Questions: Dive Into Attitude Change of LLM's Response in Jailbreak Attempts (2404.08309v1)
Abstract: As prompt-based jailbreaking of LLMs attracts growing attention, it is of great significance to establish a generalized research paradigm for evaluating attack strength and a basic model for conducting finer-grained experiments. In this paper, we propose a novel approach that focuses on a set of target questions that are inherently more sensitive to jailbreak prompts, aiming to circumvent the limitations posed by enhanced LLM security. By designing and analyzing these sensitive questions, this paper reveals a more effective method of identifying vulnerabilities in LLMs, thereby contributing to the advancement of LLM security. This research not only challenges existing jailbreaking methodologies but also fortifies LLMs against potential exploits.
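As a rough illustration of the kind of measurement the abstract describes, the sketch below compares a model's responses to the same borderline ("subtoxic") questions with and without a jailbreak prefix and counts refusal-to-compliance flips as a proxy for attack strength. The helper names (`query_model`, `attack_strength`), the keyword-based refusal check, and the flip-rate metric are illustrative assumptions, not the paper's actual protocol.

```python
# Minimal sketch (not the authors' code): measuring how a jailbreak prompt
# shifts a model's attitude on a set of "subtoxic" target questions.
# `query_model` is a hypothetical stand-in for whatever LLM endpoint is tested.

from typing import Callable, List

# Illustrative refusal cues only; a real study would use a stronger classifier.
REFUSAL_MARKERS = ["i cannot", "i can't", "i'm sorry", "as an ai"]


def is_refusal(response: str) -> bool:
    """Crude keyword check for a refusal-style answer."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def attack_strength(
    query_model: Callable[[str], str],
    jailbreak_prompt: str,
    subtoxic_questions: List[str],
) -> float:
    """Fraction of questions whose answer flips from refusal to compliance
    once the jailbreak prompt is prepended."""
    flips = 0
    for question in subtoxic_questions:
        baseline = query_model(question)
        attacked = query_model(f"{jailbreak_prompt}\n{question}")
        if is_refusal(baseline) and not is_refusal(attacked):
            flips += 1
    return flips / len(subtoxic_questions) if subtoxic_questions else 0.0
```

In this framing, questions that sit near the model's refusal boundary are the most informative probes: they flip readily under a strong jailbreak prompt, so the flip rate serves as a graded measure of attack strength rather than a binary success flag.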
- https://www.dropbox.com/scl/fo/dvhjujl2d9ofv7v833nlw/h?rlkey=mtpawy31y4fqjtlfr22z1mi68&dl=0
- Tianyu Zhang
- Zixuan Zhao
- Jiaqi Huang
- Jingyu Hua
- Sheng Zhong