PromptAttack: Prompt-based Attack for Language Models via Gradient Search (2209.01882v1)

Published 5 Sep 2022 in cs.CL, cs.AI, and cs.CR

Abstract: As pre-trained language models (PLMs) continue to grow, so do the hardware and data requirements for fine-tuning them. Researchers have therefore developed a lighter-weight method called \textit{Prompt Learning}. However, during our investigation we observe that prompt learning methods are vulnerable: they can easily be attacked by illegally constructed prompts, leading to classification errors and serious security problems for PLMs. Most current research ignores the security of prompt-based methods. In this paper, we therefore propose a malicious prompt template construction method (\textbf{PromptAttack}) to probe the security of PLMs. Several unfriendly template construction approaches are investigated to guide the model into misclassifying the task. Extensive experiments on three datasets and three PLMs demonstrate the effectiveness of our proposed approach, PromptAttack. We also conduct experiments verifying that our method is applicable in few-shot scenarios.
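The abstract does not spell out the gradient search itself. As an illustrative assumption only (not the authors' actual method), a common way to search for adversarial prompt tokens is a HotFlip-style first-order substitution: take the gradient of the loss with respect to a prompt token's embedding, then swap in the vocabulary token whose embedding most increases the loss under a linear approximation. The sketch below uses a toy embedding table and linear classifier standing in for a PLM; all names (`hotflip_step`, the toy dimensions) are hypothetical.

```python
import torch

torch.manual_seed(0)

# Toy stand-ins for a PLM: a small vocabulary embedding and a linear head.
VOCAB, DIM, CLASSES = 50, 16, 2
embed = torch.nn.Embedding(VOCAB, DIM)
clf = torch.nn.Linear(DIM, CLASSES)

def loss_for(prompt_ids, label):
    """Cross-entropy loss of the toy model on a prompt (mean-pooled embeddings)."""
    pooled = embed(prompt_ids).mean(dim=0)
    logits = clf(pooled)
    return torch.nn.functional.cross_entropy(
        logits.unsqueeze(0), torch.tensor([label])
    )

def hotflip_step(prompt_ids, label, position):
    """Replace the token at `position` with the vocab token that most
    increases the loss, scored by a first-order (gradient) approximation."""
    emb = embed(prompt_ids).detach().requires_grad_(True)
    logits = clf(emb.mean(dim=0))
    loss = torch.nn.functional.cross_entropy(
        logits.unsqueeze(0), torch.tensor([label])
    )
    loss.backward()
    grad = emb.grad[position]                    # d(loss)/d(embedding at position)
    scores = embed.weight.detach() @ grad        # first-order loss change per token
    best = int(scores.argmax())                  # token with steepest loss increase
    new_ids = prompt_ids.clone()
    new_ids[position] = best
    return new_ids

prompt = torch.tensor([3, 7, 11, 19])
label = 0
before = loss_for(prompt, label).item()
attacked = hotflip_step(prompt, label, position=2)
after = loss_for(attacked, label).item()
```

Because the toy loss is convex in the embeddings, the first-order swap never decreases it here; with a real PLM the approximation is looser, so such methods typically score candidates with the gradient and then verify the top few with forward passes.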

Authors (6)
  1. Yundi Shi (1 paper)
  2. Piji Li (75 papers)
  3. Changchun Yin (4 papers)
  4. Zhaoyang Han (7 papers)
  5. Lu Zhou (54 papers)
  6. Zhe Liu (234 papers)
Citations (14)
