Contextualized Perturbation for Textual Adversarial Attack (2009.07502v2)

Published 16 Sep 2020 in cs.CL

Abstract: Adversarial examples expose the vulnerabilities of NLP models, and can be used to evaluate and improve their robustness. Existing techniques of generating such examples are typically driven by local heuristic rules that are agnostic to the context, often resulting in unnatural and ungrammatical outputs. This paper presents CLARE, a ContextuaLized AdversaRial Example generation model that produces fluent and grammatical outputs through a mask-then-infill procedure. CLARE builds on a pre-trained masked language model and modifies the inputs in a context-aware manner. We propose three contextualized perturbations, Replace, Insert and Merge, allowing for generating outputs of varied lengths. With a richer range of available strategies, CLARE is able to attack a victim model more efficiently with fewer edits. Extensive experiments and human evaluation demonstrate that CLARE outperforms the baselines in terms of attack success rate, textual similarity, fluency and grammaticality.

Authors (7)
  1. Dianqi Li (18 papers)
  2. Yizhe Zhang (127 papers)
  3. Hao Peng (291 papers)
  4. Liqun Chen (42 papers)
  5. Chris Brockett (37 papers)
  6. Ming-Ting Sun (16 papers)
  7. Bill Dolan (45 papers)
Citations (223)

Summary

  • The paper introduces CLARE, a novel model that employs replace, insert, and merge strategies for generating effective, contextually perturbed adversarial examples.
  • The methodology leverages a mask-then-infill procedure with pre-trained language models to ensure high semantic similarity and grammatical accuracy.
  • Quantitative results demonstrate that CLARE achieves superior attack success rates and maintains textual fluency, paving the way for more robust NLP defenses.

Contextualized Perturbation for Textual Adversarial Attack

The paper addresses a significant challenge in NLP: generating adversarial examples to evaluate and enhance the robustness of NLP systems. The authors introduce CLARE, a ContextuaLized AdversaRial Example generation model, which leverages a mask-then-infill procedure built on pre-trained masked language models. This approach improves the fluency, grammaticality, and effectiveness of adversarial examples.

Key Innovations and Methodology

CLARE departs from traditional methods that often rely on heuristic, context-agnostic rules, such as synonym replacement. To address the shortcomings of these methods, which frequently lead to unnatural outputs, CLARE employs three core perturbation strategies:

  1. Replace: Substitutes a word with a contextually appropriate alternative.
  2. Insert: Adds a word without compromising the sentence structure.
  3. Merge: Combines two adjacent words into a single contextually appropriate word.

These perturbation strategies allow CLARE to produce adversarial examples of varied lengths, perturbing text inputs effectively with fewer edits than existing methods. Building on a pre-trained masked language model such as RoBERTa keeps the generated text close to the original while achieving a higher attack success rate.
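The mask-then-infill idea can be illustrated with an off-the-shelf masked language model. The sketch below uses the Hugging Face fill-mask pipeline with roberta-base to generate context-aware candidates for the three operations; it is only an illustration of how the perturbations are constructed, and omits CLARE's candidate scoring against the victim model and its similarity constraints.

```python
# Minimal sketch of mask-then-infill perturbations with a pre-trained masked LM.
# Illustration only: CLARE additionally scores candidates against the victim
# model and enforces similarity constraints, which are omitted here.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
MASK = fill_mask.tokenizer.mask_token  # "<mask>" for RoBERTa

def replace(tokens, i):
    """Replace: mask token i and let the LM propose context-aware substitutes."""
    return fill_mask(" ".join(tokens[:i] + [MASK] + tokens[i + 1:]))

def insert(tokens, i):
    """Insert: add a mask before token i, so the output grows by one word."""
    return fill_mask(" ".join(tokens[:i] + [MASK] + tokens[i:]))

def merge(tokens, i):
    """Merge: mask the bigram (i, i+1) and infill a single word in its place."""
    return fill_mask(" ".join(tokens[:i] + [MASK] + tokens[i + 2:]))

tokens = ["the", "film", "is", "truly", "enjoyable"]
for cand in replace(tokens, 4)[:3]:           # top replacements for "enjoyable"
    print(cand["token_str"], round(cand["score"], 3))
```

In the paper's procedure, candidate infills are additionally constrained to stay similar to the original sentence and are selected to reduce the victim model's confidence in the gold label.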

Quantitative Results and Comparative Analysis

The efficacy of CLARE is supported through extensive experimentation across diverse datasets, including text classification and natural language inference tasks. The model demonstrates superior performance relative to existing baselines in key metrics:

  • Attack Success Rate: CLARE consistently achieves a higher attack success rate, indicating its ability to produce adversarial examples that are more effective in deceiving NLP models.
  • Textual Similarity: The model excels in preserving the semantic content, reflected in the higher similarity scores.
  • Fluency and Grammaticality: Evaluations show reduced perplexity and grammatical errors, a testament to the quality of the generated text.
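These quantities can be grounded with simple proxies. The sketch below computes an attack success rate over examples the victim initially classifies correctly, and GPT-2 perplexity as a fluency proxy; both are common choices but not necessarily the exact evaluation setup used in the paper.

```python
# Hedged sketch of two of the quantities above: attack success rate and a
# GPT-2 perplexity fluency proxy. Metric choices here are illustrative.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def attack_success_rate(victim_predict, originals, adversarials, labels):
    """Fraction of correctly classified inputs whose adversarial version flips the prediction."""
    flipped = total = 0
    for x, x_adv, y in zip(originals, adversarials, labels):
        if victim_predict(x) == y:          # count only inputs the victim got right
            total += 1
            if victim_predict(x_adv) != y:  # the attack changed the prediction
                flipped += 1
    return flipped / max(total, 1)

def gpt2_perplexity(text, model_name="gpt2"):
    """Perplexity under GPT-2; lower values indicate more fluent text."""
    tok = GPT2TokenizerFast.from_pretrained(model_name)
    lm = GPT2LMHeadModel.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss     # average token-level cross-entropy
    return math.exp(loss.item())
```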

Moreover, in human evaluations, CLARE's adversarial examples were rated higher for maintaining meaning and grammatical accuracy compared to alternatives like TextFooler.

Implications and Future Directions

CLARE has significant implications for the development of robust NLP systems. By producing more human-like adversarial examples, researchers can better understand model vulnerabilities and devise more effective defenses. In practical terms, its ability to produce cleaner adversarial text makes it well suited to adversarial training, improving overall robustness and performance.
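As a rough illustration of that adversarial-training use, the sketch below augments a training set with label-preserving adversarial copies; `generate_adversarial` is a hypothetical stand-in for a CLARE-style attack, not an API from the paper's release.

```python
# Hypothetical sketch of data augmentation for adversarial training.
# `generate_adversarial` stands in for a CLARE-style attack and is not a real API.
def augment_with_adversarial(train_pairs, generate_adversarial):
    """Return the original (text, label) pairs plus adversarial copies that keep the label."""
    augmented = list(train_pairs)
    for text, label in train_pairs:
        adv = generate_adversarial(text, label)   # may return None if the attack fails
        if adv is not None:
            augmented.append((adv, label))        # adversarial text, original label
    return augmented
```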

Looking forward, this work opens avenues for further refinement of contextual adversarial methods, possibly extending to more nuanced language tasks such as dialogue systems or cross-lingual models. The integration of such adversarial methodologies in the training loop represents a frontier in model robustness, especially as NLP applications continue to gain complexity and prominence.

Overall, CLARE significantly advances the scope of adversarial example generation in NLP, offering a framework that balances effectiveness with linguistic integrity. The open-source release of its models paves the way for continued exploration and integration into diverse NLP endeavors.