A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement (2203.10714v1)

Published 21 Mar 2022 in cs.CL

Abstract: Recent years have seen the wide application of NLP models in crucial areas such as finance, medical treatment, and news media, raising concerns about model robustness and vulnerabilities. In this paper, we propose a novel prompt-based adversarial attack on NLP models together with a robustness enhancement technique. We first construct a malicious prompt for each instance and generate adversarial examples via mask-and-filling guided by the malicious purpose. Our attack targets the inherent vulnerabilities of NLP models, allowing us to generate samples even without interacting with the victim model, as long as it is based on a pre-trained language model (PLM). Furthermore, we design a prompt-based adversarial training method to improve the robustness of PLMs. Because our training method does not actually generate adversarial samples, it can be applied to large-scale training sets efficiently. Experimental results show that our attack achieves a high success rate while producing more diverse, fluent, and natural adversarial examples. In addition, our robustness enhancement method significantly improves models' ability to resist adversarial attacks. Our work indicates that the prompting paradigm has great potential for probing fundamental flaws of PLMs and fine-tuning them for downstream tasks.
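To make the mask-and-fill idea concrete, here is a minimal Python sketch of how adversarial candidates can be generated with a generic masked language model. This is an illustrative reconstruction, not the authors' implementation: the model choice (bert-base-uncased), the per-token masking loop, and the simple replacement filter are all assumptions for demonstration, whereas the paper's actual method steers the fill step with purpose-built malicious prompts.

```python
# Illustrative sketch of mask-and-fill adversarial candidate generation.
# NOT the paper's implementation: the model, the per-token masking
# heuristic, and the filtering below are assumptions for demonstration.
from transformers import pipeline

# Any masked language model can serve as the filler PLM here.
fill = pipeline("fill-mask", model="bert-base-uncased")

def mask_and_fill_candidates(sentence: str, top_k: int = 5):
    """Mask each token in turn and let the PLM propose replacements,
    yielding (candidate_sentence, fill_score) pairs."""
    tokens = sentence.split()  # naive whitespace tokenization for the sketch
    candidates = []
    for i in range(len(tokens)):
        masked = tokens.copy()
        masked[i] = fill.tokenizer.mask_token  # e.g. "[MASK]" for BERT
        for pred in fill(" ".join(masked), top_k=top_k):
            replacement = pred["token_str"].strip()
            if replacement.lower() != tokens[i].lower():  # keep real edits only
                candidate = " ".join(tokens[:i] + [replacement] + tokens[i + 1:])
                candidates.append((candidate, pred["score"]))
    return candidates

if __name__ == "__main__":
    for text, score in mask_and_fill_candidates("the movie was great")[:5]:
        print(f"{score:.3f}  {text}")
```

In practice, candidates like these would be kept only if they flip the victim classifier's prediction while remaining fluent and semantically close to the original input; the paper's prompt-based formulation biases the PLM's fills toward such label-flipping substitutions directly.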

Authors (8)
  1. Yuting Yang (45 papers)
  2. Pei Huang (21 papers)
  3. Juan Cao (73 papers)
  4. Jintao Li (44 papers)
  5. Yun Lin (45 papers)
  6. Jin Song Dong (49 papers)
  7. Feifei Ma (11 papers)
  8. Jian Zhang (543 papers)
Citations (13)
