Word-level Textual Adversarial Attacking as Combinatorial Optimization (1910.12196v4)

Published 27 Oct 2019 in cs.CL, cs.AI, and cs.LG

Abstract: Adversarial attacks are carried out to reveal the vulnerability of deep neural networks. Textual adversarial attacking is challenging because text is discrete and a small perturbation can bring significant change to the original input. Word-level attacking, which can be regarded as a combinatorial optimization problem, is a well-studied class of textual attack methods. However, existing word-level attack models are far from perfect, largely because unsuitable search space reduction methods and inefficient optimization algorithms are employed. In this paper, we propose a novel attack model, which incorporates the sememe-based word substitution method and particle swarm optimization-based search algorithm to solve the two problems separately. We conduct exhaustive experiments to evaluate our attack model by attacking BiLSTM and BERT on three benchmark datasets. Experimental results demonstrate that our model consistently achieves much higher attack success rates and crafts more high-quality adversarial examples as compared to baseline methods. Also, further experiments show our model has higher transferability and can bring more robustness enhancement to victim models by adversarial training. All the code and data of this paper can be obtained on https://github.com/thunlp/SememePSO-Attack.

Overview of "Word-level Textual Adversarial Attacking as Combinatorial Optimization"

The paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization" frames word-level adversarial attacks on text-based neural network models as a combinatorial optimization problem and improves both of its components: how the search space of candidate substitutions is reduced, and how that space is searched. This addresses the unsuitable search space reduction methods and inefficient optimization algorithms of existing attack models.

Methodological Innovation

The authors propose an approach that divides the adversarial attack process into two key steps:

  1. Search Space Reduction: The paper introduces a sememe-based word substitution method. Sememes, defined as the smallest semantic units in language, allow for higher-quality substitutions by focusing on semantic consistency. This method is noted to outperform others that rely on word embeddings or synonym databases like WordNet by generating more potential substitutes that preserve grammaticality and semantic intent.
  2. Adversarial Example Search Algorithm: The authors employ Particle Swarm Optimization (PSO) as a search algorithm for generating adversarial examples. PSO, compared to other strategies such as genetic algorithms or greedy algorithms, is shown to provide more efficient convergence in finding successful attacks, even under limited information about the victim models (black-box setting).
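
As an illustration of the sememe-based criterion in step 1, the sketch below builds candidate substitutes from a toy sememe dictionary. The paper itself uses HowNet annotations, where a word may have several senses and two words are treated as interchangeable when a sense of one shares an identical sememe set with a sense of the other; the dictionary contents and function name here are hypothetical simplifications (one sememe set per word).

```python
# Toy sememe annotations (the paper uses HowNet; entries here are illustrative).
# A sememe is a minimal semantic unit; words sharing the same sememe set are
# taken to be mutually substitutable.
SEMEMES = {
    "good":  {"quality", "positive"},
    "fine":  {"quality", "positive"},
    "great": {"quality", "positive", "degree"},
    "bad":   {"quality", "negative"},
    "film":  {"entertainment", "shows"},
    "movie": {"entertainment", "shows"},
}

def sememe_substitutes(word, sememe_dict=SEMEMES):
    """Return words whose sememe set exactly matches that of `word`.

    Simplified: the real criterion compares sememe sets sense by sense,
    so polysemous words can match on any one of their senses.
    """
    target = sememe_dict.get(word)
    if target is None:
        return []  # word not annotated: no substitutes proposed
    return [w for w, s in sememe_dict.items() if w != word and s == target]
```

Note how the exact-match rule excludes "great" as a substitute for "good" (its sememe set carries an extra `degree` sememe), which is the kind of fine-grained filtering that embedding- or WordNet-based methods lack.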

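The PSO search in step 2 can be sketched for the discrete word-substitution space as follows. This is a simplified illustration rather than the paper's exact update rule: each particle is an assignment of substitutes to positions, "velocity" becomes a probability of copying choices from the particle's personal best and the swarm's global best, and `fitness` stands in for the victim model's probability of the wrong label in the black-box setting. All parameter values and names are illustrative.

```python
import random

def pso_attack(words, substitutes, fitness, pop_size=8, iters=20, seed=0):
    """Discrete PSO sketch for word-level attack.

    substitutes[i] lists the allowed words at position i, with
    substitutes[i][0] being the original word. Higher fitness means the
    candidate sentence is closer to fooling the victim model.
    """
    rng = random.Random(seed)
    n = len(words)
    sent = lambda pos: [substitutes[i][pos[i]] for i in range(n)]

    # Each particle starts as the original sentence with one random substitution.
    particles = []
    for _ in range(pop_size):
        p = [0] * n
        i = rng.randrange(n)
        p[i] = rng.randrange(len(substitutes[i]))
        particles.append(p)

    pbest = [p[:] for p in particles]                 # personal bests
    pbest_fit = [fitness(sent(p)) for p in pbest]
    g = max(range(pop_size), key=lambda k: pbest_fit[k])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]      # global best

    for t in range(iters):
        w = 0.8 - 0.6 * t / max(iters - 1, 1)  # decaying inertia weight
        for k, p in enumerate(particles):
            for i in range(n):
                r = rng.random()
                if r < (1 - w) / 2:
                    p[i] = pbest[k][i]           # pull toward personal best
                elif r < 1 - w:
                    p[i] = gbest[i]              # pull toward global best
                elif rng.random() < 0.1:
                    # keep current choice with prob w, occasionally mutate
                    p[i] = rng.randrange(len(substitutes[i]))
            f = fitness(sent(p))
            if f > pbest_fit[k]:
                pbest[k], pbest_fit[k] = p[:], f
                if f > gbest_fit:
                    gbest, gbest_fit = p[:], f
    return sent(gbest), gbest_fit
```

The decaying inertia mirrors the usual PSO trade-off: early iterations explore the substitution space broadly, later ones exploit the best candidates found so far.
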
Empirical Evaluation

The paper extensively evaluates the proposed adversarial attack framework on BiLSTM and BERT models across three datasets: IMDB, SST-2, and SNLI. The success rates, adversarial example quality (measured in terms of modification rate, grammaticality, and fluency), attack validity, and transferability of adversarial examples are presented as key metrics.

  • The proposed model demonstrates significantly higher attack success rates across all tested models, with figures like 100% for BiLSTM on the IMDB dataset.
  • Compared to baseline methods, the Sememe+PSO approach achieves lower modification rates, smaller increases in grammatical errors, and better fluency in its adversarial examples.
  • Human evaluation reveals that the validity of attacks, which represents semantic consistency of adversarial examples, is competitive with or superior to existing techniques.
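
Of the quality metrics above, the modification rate is the simplest to state precisely: the fraction of word positions changed between the original and adversarial sentence (word-level substitution attacks preserve sentence length). A minimal sketch, with an illustrative example rather than data from the paper:

```python
def modification_rate(original, adversarial):
    """Fraction of word positions changed; both inputs are token lists
    of equal length, as produced by word-level substitution attacks."""
    assert len(original) == len(adversarial)
    changed = sum(a != b for a, b in zip(original, adversarial))
    return changed / len(original)

# modification_rate("this movie is great".split(),
#                   "this film is superb".split())  → 0.5
```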

Implications and Future Directions

This research has several important implications. The sememe-based substitution method's ability to generate semantically consistent adversarial examples could inspire further exploration into semantic-level attacks, particularly in contexts where linguistic nuances are crucial. Likewise, the application of PSO in adversarial settings offers a robust alternative to traditional genetic algorithms, suggesting potential cross-application in other domains beyond text.

Future work could leverage these semantically rich adversarial examples not only for testing model robustness but also in defensive training strategies to harden models against attacks. Moreover, the observed transferability of adversarial examples across model architectures points toward more generalized adversarial evaluation benchmarks for diverse tasks within NLP and AI.

In summary, this paper contributes a substantial advancement in adversarial NLP by aligning attack methodology with semantic integrity and by proposing an efficient optimization framework, paving the way for both offensive and defensive innovations around neural NLP models.

Authors (7)
  1. Yuan Zang
  2. Fanchao Qi
  3. Chenghao Yang
  4. Zhiyuan Liu
  5. Meng Zhang
  6. Qun Liu
  7. Maosong Sun
Citations (83)