
Attack Prompt Generation for Red Teaming and Defending Large Language Models (2310.12505v1)

Published 19 Oct 2023 in cs.CL, cs.CR, and cs.LG

Abstract: LLMs are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, each of which has its own limitations in construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, considering the impressive capabilities of newly emerged LLMs, we propose an attack framework to instruct LLMs to mimic human-generated prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework to enhance their safety against red teaming attacks. Extensive experiments on different LLMs validate the effectiveness of our proposed attack and defense frameworks. Additionally, we release a series of attack prompt datasets named SAP with varying sizes, facilitating the safety evaluation and enhancement of more LLMs. Our code and dataset are available at https://github.com/Aatrox103/SAP .
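As a rough illustration of the in-context-learning attack framework described in the abstract, the sketch below prompts an instruction-following LLM with a few manually written attack prompts as demonstrations and asks it to imitate them. The `llm_complete` helper, the seed prompts, and the filtering step are hypothetical placeholders, not the authors' actual SAP implementation.

```python
# Hypothetical sketch of in-context-learning attack prompt generation
# (not the authors' SAP code; llm_complete and the seeds are placeholders).
from typing import Callable, List

def build_icl_prompt(seed_prompts: List[str]) -> str:
    """Assemble a few manually written attack prompts as in-context demonstrations."""
    demos = "\n\n".join(f"Example attack prompt:\n{p}" for p in seed_prompts)
    return (
        "You are red-teaming a language model for safety evaluation.\n\n"
        f"{demos}\n\n"
        "Write one new attack prompt in the same style:"
    )

def generate_attack_prompts(
    llm_complete: Callable[[str], str],  # any text-completion API wrapper
    seed_prompts: List[str],
    n: int = 10,
) -> List[str]:
    """Ask the LLM to mimic the manual seeds, collecting candidate attack prompts."""
    prompt = build_icl_prompt(seed_prompts)
    candidates = [llm_complete(prompt).strip() for _ in range(n)]
    # In the paper's framework, candidates would next be scored, with strong ones
    # added back as demonstrations (attack side) or used to fine-tune the victim
    # LLM over repeated rounds (defense side).
    return [c for c in candidates if c]
```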

Authors (6)
  1. Boyi Deng (4 papers)
  2. Wenjie Wang (150 papers)
  3. Fuli Feng (143 papers)
  4. Yang Deng (113 papers)
  5. Qifan Wang (129 papers)
  6. Xiangnan He (200 papers)
Citations (39)
GitHub: https://github.com/Aatrox103/SAP