Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
80 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
43 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Subtoxic Questions: Dive Into Attitude Change of LLM's Response in Jailbreak Attempts (2404.08309v1)

Published 12 Apr 2024 in cs.CR, cs.AI, and cs.CL

Abstract: As LLMs of Prompt Jailbreaking are getting more and more attention, it is of great significance to raise a generalized research paradigm to evaluate attack strengths and a basic model to conduct subtler experiments. In this paper, we propose a novel approach by focusing on a set of target questions that are inherently more sensitive to jailbreak prompts, aiming to circumvent the limitations posed by enhanced LLM security. Through designing and analyzing these sensitive questions, this paper reveals a more effective method of identifying vulnerabilities in LLMs, thereby contributing to the advancement of LLM security. This research not only challenges existing jailbreaking methodologies but also fortifies LLMs against potential exploits.

Definition Search Book Streamline Icon: https://streamlinehq.com
References (11)
  1. Deng, G., “MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots”, arXiv e-prints, 2023. doi:10.48550/arXiv.2307.08715.
  2. Zou, A., Wang, Z., Carlini, N., Nasr, M., Zico Kolter, J., and Fredrikson, M., “Universal and Transferable Adversarial Attacks on Aligned Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2307.15043.
  3. Liu, X., Xu, N., Chen, M., and Xiao, C., “AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2310.04451.
  4. Perez, E., “Red Teaming Language Models with Language Models”, arXiv e-prints, 2022. doi:10.48550/arXiv.2202.03286.
  5. Shen, X., Chen, Z., Backes, M., Shen, Y., and Zhang, Y., “”Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2308.03825.
  6. Schulhoff, S., “Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition”, arXiv e-prints, 2023. doi:10.48550/arXiv.2311.16119.
  7. Wolf, Y., Wies, N., Avnery, O., Levine, Y., and Shashua, A., “Fundamental Limitations of Alignment in Large Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2304.11082.
  8. Liu, Y., “Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study”, arXiv e-prints, 2023. doi:10.48550/arXiv.2305.13860.
  9. OpenAI, “GPT-4 Technical Report”, arXiv e-prints, 2023. doi:10.48550/arXiv.2303.08774.
  10. Ding, P., “A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily”, arXiv e-prints, 2023. doi:10.48550/arXiv.2311.08268.
  11. https://www.dropbox.com/scl/fo/dvhjujl2d9ofv7v833nlw/h?rlkey=mtpaw y31y4fqjtlfr22z1mi68&dl=0
User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (5)
  1. Tianyu Zhang (110 papers)
  2. Zixuan Zhao (11 papers)
  3. Jiaqi Huang (17 papers)
  4. Jingyu Hua (8 papers)
  5. Sheng Zhong (57 papers)