
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model (2203.11199v1)

Published 19 Mar 2022 in cs.LG, cs.CL, and cs.CR

Abstract: Recently, the problem of robustness of pre-trained language models (PrLMs) has received increasing research interest. Recent studies on adversarial attacks achieve high attack success rates against PrLMs, claiming that PrLMs are not robust. However, we find that the adversarial samples that PrLMs fail on are mostly non-natural and do not appear in reality. We question the validity of the current evaluation of PrLM robustness based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples. We also investigate two applications of the anomaly detector: (1) In data augmentation, we employ the anomaly detector to force the generation of augmented data that are distinguished as non-natural, which brings larger gains to the accuracy of PrLMs. (2) We apply the anomaly detector to a defense framework to enhance the robustness of PrLMs. It can be used to defend against all types of attacks and achieves higher accuracy on both adversarial samples and compliant (non-adversarial) samples than other defense frameworks.
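The defense framework in application (2) can be pictured as a simple screen-then-classify pipeline: an anomaly detector first judges whether an input looks natural, and only flagged inputs receive extra handling before reaching the PrLM. The sketch below illustrates this flow with toy stand-ins; all function names (`detect_anomaly`, `restore`, `prlm_classify`) and the character-substitution heuristic are illustrative assumptions, not the paper's actual detector or API.

```python
def detect_anomaly(text: str) -> bool:
    """Toy stand-in for the anomaly detector: flags inputs containing
    character-level perturbations (e.g. leetspeak substitutions) that
    rarely occur in natural text. A real detector would be a trained
    classifier over PrLM features."""
    suspicious_tokens = {"h4te", "g00d", "terr1ble"}
    return any(tok in suspicious_tokens for tok in text.lower().split())


def restore(text: str) -> str:
    """Toy restoration step: undo simple character substitutions.
    Purely illustrative of pre-processing a defense might apply."""
    return text.replace("4", "a").replace("0", "o").replace("1", "i")


def prlm_classify(text: str) -> str:
    """Toy stand-in for a fine-tuned PrLM sentiment classifier."""
    negative_words = ("hate", "terrible")
    return "negative" if any(w in text.lower() for w in negative_words) else "positive"


def defended_classify(text: str) -> str:
    # Inputs flagged as non-natural are restored before classification;
    # inputs judged natural go straight to the PrLM. This is the
    # screen-then-classify structure, attack-type agnostic by design.
    if detect_anomaly(text):
        text = restore(text)
    return prlm_classify(text)
```

For example, `defended_classify("I h4te this movie")` restores the perturbed token before classification and returns `"negative"`, while an unperturbed input like `"I love this movie"` bypasses restoration entirely.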

Authors (4)
  1. Jiayi Wang (74 papers)
  2. Rongzhou Bao (5 papers)
  3. Zhuosheng Zhang (125 papers)
  4. Hai Zhao (227 papers)
Citations (5)
