A Differentiable Language Model Adversarial Attack on Text Classifiers (2107.11275v1)

Published 23 Jul 2021 in cs.CL and cs.LG

Abstract: Robustness of large Transformer-based models for natural language processing is an important issue due to their capabilities and wide adoption. One way to understand and improve the robustness of these models is to explore adversarial attack scenarios: check whether a small perturbation of the input can fool a model. Due to the discrete nature of textual data, gradient-based adversarial methods, widely used in computer vision, are not applicable per se. The standard strategy to overcome this issue is to develop token-level transformations, which do not take the whole sentence into account. In this paper, we propose a new black-box sentence-level attack. Our method fine-tunes a pre-trained language model to generate adversarial examples. The proposed differentiable loss function depends on a substitute classifier score and an approximate edit distance computed via a deep learning model. We show that the proposed attack outperforms competitors on a diverse set of NLP problems in both computed metrics and human evaluation. Moreover, because the attack uses a fine-tuned language model, the generated adversarial examples are hard to detect, so current models are not robust. Hence, it is difficult to defend against the proposed attack, which is not the case for other attacks.
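
The abstract describes a loss that combines a substitute classifier's score on the original label with a learned, differentiable approximation of edit distance. Below is a minimal sketch of how such a combined objective could look; the names `substitute_classifier`, `edit_distance_model`, and `dist_weight` are assumptions for illustration, not the paper's actual implementation.

```python
import torch

def adversarial_loss(gen_embeddings, orig_embeddings, true_labels,
                     substitute_classifier, edit_distance_model,
                     dist_weight=1.0):
    """Hypothetical differentiable attack objective.

    Minimizing this loss pushes the generator toward sequences that the
    substitute classifier no longer assigns to the true label, while the
    learned distance term keeps the perturbation close to the original.

    Assumed interfaces (not from the paper):
      - substitute_classifier(embeddings) -> (batch, n_classes) probabilities
      - edit_distance_model(a, b) -> (batch,) approximate edit distances
    """
    probs = substitute_classifier(gen_embeddings)
    # Probability of the original class: the attacker wants this low.
    target_score = probs[torch.arange(probs.size(0)), true_labels]
    # Deep-model surrogate for edit distance, so the term stays differentiable.
    distance = edit_distance_model(gen_embeddings, orig_embeddings)
    return target_score.mean() + dist_weight * distance.mean()
```

Because both terms are differentiable with respect to the generator's continuous outputs, gradients can flow back into the fine-tuned language model, sidestepping the discreteness problem that blocks naive gradient-based attacks on text.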

Authors (8)
  1. Ivan Fursov (3 papers)
  2. Alexey Zaytsev (61 papers)
  3. Pavel Burnyshev (4 papers)
  4. Ekaterina Dmitrieva (3 papers)
  5. Nikita Klyuchnikov (10 papers)
  6. Andrey Kravchenko (6 papers)
  7. Ekaterina Artemova (53 papers)
  8. Evgeny Burnaev (189 papers)
Citations (15)